The Association of rs1670533 Polymorphism in RNF212 Gene With the Risk of Down Syndrome in Young Women.
Objective: To evaluate the association between the rs1670533 polymorphism in the RNF212 gene and the risk of Down syndrome in young women. Materials and methods: In a case-control study, one hundred pregnant women were evaluated. The case group consisted of pregnancies with a diagnosis of Down syndrome in women younger than 35 years; the control group consisted of pregnancies with a normal neonate. Fifty pregnant women were allocated to each group, and one hundred blood samples were collected. Genomic DNA was extracted by the salting-out method, the rs1670533 polymorphism was detected by PCR, and PCR products were visualized by 2% agarose gel electrophoresis. Results: The TT rs1670533 haplotype was present in 36% of pregnant women with Down syndrome versus 14% of normal pregnant women (p = 0.003; 95% CI 1.665-5.305; OR = 3.107); the TC haplotype was present in 56% of normal pregnancies versus 16% of pregnancies with Down syndrome (p = 4.288e-12; 95% CI 0.145-0.25; OR = 0.126). Conclusion: The TT rs1670533 haplotype appears to be a risk factor for a Down syndrome pregnancy in young women, while the TC haplotype appears to have a protective effect.
Introduction
Down syndrome, or trisomy 21, is the leading cause of chromosomal abnormality in humans and causes mental retardation (1). The extra chromosome is the result of nondisjunction in meiotic division, and its origin is usually the maternal chromosome (2). Maternal age at the time of conception is a well-known risk factor for nondisjunction and Down syndrome (3), although many young women also have children with Down syndrome (4). There is a long-standing controversy about physiologic versus chronological aging: when a young ovary loses its primordial follicle pool, nondisjunction can result at oogenesis, just as the ovary of an older woman is prone to nondisjunction at meiosis. There are many ideas about constitutive and age-related changes in the hormonal environment and in oocytes and follicles that could be responsible for failures in the segregation of homologues and sister chromatids at meiosis (5).
Correspondence: Fatmeh Davari-Tanha, Department of OBS & GYN, Yas Hospital, North Nejatollahi, Tehran, Iran. Email: fatedavtanha@gmail.com
Accurate chromosome segregation during meiosis depends on crossing-over (6), and each pair of sister chromatids obtains at least one crossover, although many recombination sites yield non-crossovers (7). A strong regulator of crossing-over is RNF212, which is associated with variation in crossover rates in humans (8). Mouse RNF212 is necessary for crossing-over; it regulates the mechanism that couples chromosome synapsis to the formation of crossover-specific recombination complexes (9). Selective localization of RNF212 to a subset of recombination loci is suggested to be a fundamental early step in the crossover designation reaction (10). The role of RNF212 is to stabilize these sites for meiosis-specific recombination factors, including the MutSγ complex (MSH4-MSH5) (11). Selective stabilization of crucial recombination proteins is an essential feature of meiotic crossover regulation. Haploinsufficiency indicates that RNF212 is a limiting factor for crossover control and raises the possibility that human alleles may alter the amount or stability of RNF212 and be risk factors for aneuploid conditions (12).
It has been shown that both male and female Rnf212-/- mice have a normal phenotype but are sterile; male Rnf212 nulls had reduced testis size and an absence of post-anaphase I cells. Rnf212-/- ovaries were of normal size compared with those of wild-type animals, and oocyte numbers were similar in mature animals (13). Apparently normal pachytene, with fully synapsed autosomes, occurred in sperm and oocyte nuclei of Rnf212-null mice. However, X-Y synapsis was destabilized and crossover complexes were absent in Rnf212-/- spermatocytes. It is suggested that Rnf212 is necessary to stabilize the meiosis-specific factors Msh4 (602105) and Tex11 (300311) (14).
Two SNPs within the RNF212 gene, rs3796619 (612041.0001) and rs1670533 (612041.0002), have been found (8) that are related to inverse recombination rates in men and women. The TC haplotype was associated with a high female recombination rate and a low male recombination rate (15).
In this study we aimed to evaluate the effect of the rs1670533 polymorphism in the RNF212 gene on the incidence of Down syndrome pregnancies in young women, comparing them with pregnancies with a healthy baby.
Materials and methods
This case-control study included 50 pregnant women with a Down syndrome neonate as the case group and 50 pregnant women with a healthy neonate as the control group. Case and control mothers were recruited from the prenatal clinic of a tertiary university-based hospital. Informed consent was obtained from all participating mothers. The ethics committee of Tehran University of Medical Sciences approved this project (ethical code 21626). The inclusion criteria were age < 35 years and a pregnancy with Down syndrome confirmed by karyotype.
Whole blood samples of 5 mL were collected from the case and control groups and placed into tubes containing ethylenediaminetetraacetic acid (EDTA) anticoagulant. Immediately after collection, all samples were stored at -20°C until use. Genomic DNA was extracted from whole blood using the salting-out method. Specific primers were designed for the TC rs1670533 and TT rs1670533 variants, and Gap-polymerase chain reaction (PCR) was performed for both; the primer sequences are shown in Table 1. The PCR reaction was carried out in 0.2-mL microtubes with a total volume of 15 μL containing 7.5 μL Master Mix Red (Ampliqon), 1.5 μL of the TC rs1670533 and TT rs1670533 primer pairs (10 pmol of each primer), 0.8 μL of beta-globin primers (10 pmol of each primer), 3.5 μL of sample DNA (100 ng), and 1.7 μL double-distilled water. The PCR reaction was performed in 30 cycles. For TC rs1670533, the mixture was first subjected to initial denaturation at 95°C for 5 min, followed by 30 cycles of denaturation at 94°C for 30 s, primer annealing at 59°C for 40 s, and DNA extension at 72°C for 30 s, with a final extension at 72°C for 3 min. For TT rs1670533, the conditions were initial denaturation at 94°C for 5 min, followed by 30 cycles of denaturation at 94°C for 1 min, annealing at 61°C for 45 s, and extension at 72°C for 1 min, with a final extension at 72°C for 7 min. The PCR products (TC, TT, and β-globin, 102 bp) were then subjected to electrophoresis on a 2% agarose gel and visualized with ethidium bromide. β-globin (Cinagen) was used as the internal control (Table 1).
Data were analyzed with SPSS 19 for Windows. The χ² and Fisher's exact tests were used to compare variables between groups; p < 0.05 was considered statistically significant. Odds ratios (OR) with 95% confidence intervals (CI) were calculated.
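As an illustration of how an odds ratio is computed from a 2×2 contingency table in a case-control design, here is a minimal Python sketch; the counts are hypothetical and are not the study's raw data:

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table:
    a = exposed cases,   b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    OR = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

# Hypothetical counts for illustration only:
# 18 of 50 case mothers carry the haplotype, 8 of 50 controls do.
print(odds_ratio(18, 32, 8, 42))  # → 2.953125
```

An OR above 1 indicates the exposure (here, a haplotype) is more common among cases, i.e. a candidate risk factor; below 1, a candidate protective factor.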
Results
One hundred women aged 18-35 years participated in this survey; the 50 women in the case group were under 35 years old at the birth of their Down syndrome child. The results showed that the TC rs1670533 haplotype was the most frequent type in the control group (56% in controls versus 14% in cases), while the TT rs1670533 haplotype was more common in the case group (36% versus 16% in controls). The difference was statistically significant (p = 0.003). The odds ratio for having a Down syndrome child during pregnancy at age < 35 years was 2.722 for women with the TT haplotype (p = 1.41e-10, OR = 2.722). The odds ratio for having a healthy child for women under 35 with the TC haplotype was 0.152; in other words, the TC allele has a protective effect against Down syndrome, whereas the TT allele is a risk factor for having a Down syndrome child (Table 2).
Discussion
The results of this study show that the risk of having a child with Down syndrome in women younger than 35 years is associated with the TT rs1670533 haplotype, whereas the TC rs1670533 haplotype has a protective effect. The TT haplotype carries a 2.7-fold risk of having a child with Down syndrome.
The association between the C677T allele and the maternal risk of having a Down syndrome child was evaluated in Jordan (16). The authors evaluated the frequencies of the MTHFR C677T and A1298C polymorphisms in their country. Their results showed that the mutant 677T variant was associated (χ² = 6.93, p = 0.008) with case mothers overall, although the association was not statistically significant except in young women (OR = 4.2, 95% CI: 1.61-10.97, p = 0.003). That study suggested that the 677T allele plays a crucial role in delivering a child affected by Down syndrome in the TT (homozygous) and AT (heterozygous) states (OR = 10.35, p = 0.000) (16).
The pairing, synapsis, and segregation of homologous parental chromosomes (homologs) are specific features of the meiotic program, and homologous recombination plays an essential role in this concert (17). Recognition of homology and DNA strand exchange promote the pairing of homologs and their intimate connection by zipper-like structures called synaptonemal complexes (18). Finally, a subset of recombination loci create crossovers, resulting in stable inter-homolog connections that promote homolog bi-orientation on the spindle and exact disjunction at meiosis I (19). Failure to crossover, or the suboptimal location of crossovers (proximal to centromeres or telomeres), is responsible for missegregation of homologs. In humans, aneuploidy resulting from such meiotic errors is a leading cause of spontaneous abortion and developmental disease (20).
This work establishes RNF212 as a crucial crossover factor during mammalian meiosis and provides new insight into the molecular interactions that underlie the differentiation of crossover and noncrossover recombination. How and when specific recombination sites are designated as having a crossover fate remains unknown (21).
RNF212 has a high affinity for binding the synaptonemal complex central region, which tends to outcompete binding to MutSγ-associated recombination sites (22). The authors cloned full-length mouse Rnf212. The deduced 307-amino-acid protein has an N-terminal RING finger domain, followed by a coiled-coil domain and a C-terminal serine-rich domain. The RING finger domain is characteristic of E3 ligase enzymes that catalyze protein modification by ubiquitin-like molecules (23). The human RNF212 protein shares significant identity with the full-length mouse protein and has a similar domain structure. The authors also identified two splice variants of mouse Rnf212 that encode C-terminally truncated proteins of 133 and 52 amino acids (24). Immunohistochemical analysis of mouse spermatocyte and oocyte nuclei revealed dynamic localization of Rnf212 to synaptonemal complexes, including the pseudoautosomal regions of the X-Y chromosomes; Rnf212 localized more weakly to DNA double-strand break sites (25).
Conclusion
According to the results of this study, the TT rs1670533 haplotype appears to be a risk factor for a Down syndrome pregnancy in young women, and the TC haplotype appears to have a protective effect.
Conflict of Interests
The authors have no conflict of interests.
Data Mining Techniques in the Diagnosis of Tuberculosis
Data mining is the knowledge-discovery process that helps in extracting interesting patterns from large amounts of data. With the amount of data doubling every three years, data mining is becoming an increasingly important tool for transforming these data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection, and medical and scientific discovery (J. Han & M. Kamber, 2006).
Selection of data mining / knowledge discovery in database
The third step is data mining, which extracts patterns and models hidden in the data. This is an essential process in which intelligent methods are applied to extract data patterns. In this step we first select the data mining task and then the data mining method. The major classes of data mining methods are predictive modeling, such as classification and regression; segmentation (clustering); and association rules, which are explained in detail in the next section.
Interpretation and evaluation of results
The fourth step is to interpret (post-process) the discovered knowledge, especially in terms of description and prediction, which are the two primary goals of a discovery system in practice. Experience shows that patterns or models discovered from data are not always of interest or direct use, and the KDD process is necessarily iterative, with repeated judgement of the discovered knowledge. One standard way to evaluate induced rules is to divide the data into two sets, training on the first set and testing on the second. One can repeat this process a number of times with different splits and then average the results to estimate the rules' performance.
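The split-and-average evaluation scheme just described can be sketched in a few lines of Python; the `evaluate` helper, the `majority_learner`, and the toy data below are all illustrative, not part of any system described in this chapter:

```python
import random

def evaluate(records, labels, induce, n_splits=5, train_frac=0.7, seed=0):
    """Repeatedly split the data, train on one part, test on the other,
    and average the accuracies (the standard scheme described above)."""
    rng = random.Random(seed)
    idx = list(range(len(records)))
    accs = []
    for _ in range(n_splits):
        rng.shuffle(idx)
        cut = int(len(idx) * train_frac)
        train, test = idx[:cut], idx[cut:]
        model = induce([records[i] for i in train], [labels[i] for i in train])
        hits = sum(model(records[i]) == labels[i] for i in test)
        accs.append(hits / len(test))
    return sum(accs) / len(accs)

# Toy "rule learner": always predict the majority class of the training set.
def majority_learner(xs, ys):
    top = max(set(ys), key=ys.count)
    return lambda x: top

data = [[0], [1], [0], [1], [0], [1], [0], [0], [1], [0]]
labels = ["n", "y", "n", "y", "n", "y", "n", "n", "y", "n"]
print(evaluate(data, labels, majority_learner))
```

Any induction method with the same train-then-predict interface can be plugged in for `induce`.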
Using discovered knowledge
The final step is to put the discovered knowledge to practical use, which is the ultimate goal of knowledge discovery. The information obtained can later be used to explain current or historical phenomena, predict the future, and help decision-makers form policy from the established facts (ho, nd).
Data mining tasks and functionalities
Data mining functionalities fall into two categories: descriptive data mining and predictive data mining. Descriptive methods find human-interpretable patterns that describe the data. Predictive methods perform inference on the current data in order to make predictions (J. Han & M. Kamber, 2006).
The predictive tasks of data mining are:
Classification - Arranges the data into predefined groups. For example, an email program might attempt to classify an email as legitimate or spam. Common algorithms include decision tree learning, nearest neighbor, naive Bayesian classification, and neural networks.
Regression -Attempts to find a function which models the data with the least error.
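As a minimal illustration of the classification task above, here is a sketch of the nearest-neighbour method in Python; the mini training set (spam vs. legitimate email, reduced to two numeric features) is hypothetical:

```python
def nearest_neighbour(train, query):
    """1-NN: predict the class of the closest training record
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda rec: dist(rec[0], query))
    return label

# Hypothetical mini data set: (feature vector, class) pairs.
train = [((1.0, 1.0), "spam"), ((0.0, 0.2), "legit"), ((0.9, 0.8), "spam")]
print(nearest_neighbour(train, (0.1, 0.1)))  # → "legit"
```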
The descriptive tasks of data mining are: Association rule learning - Searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as "market basket analysis". Clustering - Like classification, but the groups are not predefined, so the algorithm tries to group similar items together.
Data mining finds applications in various fields. It draws ideas from many areas such as machine learning/artificial intelligence, pattern recognition, statistics, and database systems. In recent years, data mining has been widely used in genetics, medicine, and bioinformatics, with applications to biomedical data facilitated by domain ontologies and to mining clinical trial data; this is also called medical data mining.
Different types of medical data are now available on the web, where data mining algorithms and applications can be applied to help with diagnosis. Efficient and scalable algorithms can be implemented in both sequential and parallel modes, improving performance. This type of mining is called medical data mining.
Medical data mining
In recent years, data mining has been widely used in the areas of genetics and medicine; this is called medical data mining. In the past two decades we have witnessed revolutionary changes in biomedical research and biotechnology. There has been explosive growth in biomedical data, ranging from data collected in pharmaceutical studies and cancer therapy investigations to data identified in genomics and proteomics research. The rapid progress of biotechnology and bio-data analysis methods has led to the emergence and fast growth of a promising new field: bioinformatics. At the same time, recent progress in data mining research has led to the development of numerous efficient and scalable methods for mining interesting patterns and knowledge in large databases, ranging from efficient classification methods to clustering, outlier analysis, frequent, sequential, and structured pattern analysis methods, and visualization and spatial/temporal data analysis tools. The question becomes how to bridge the two fields, data mining and bioinformatics, for successful mining of biomedical data. In particular, we should analyze how data mining may support efficient and effective biomedical data analysis and outline research problems that may motivate the further development of powerful data mining tools for biological and medical data.
Data mining is a process that involves aggregating raw data stored in a database and analyzing them to identify trends, patterns, and anomalies. Medical data mining is an active research area within data mining, since medical databases have accumulated large quantities of information about patients and their clinical conditions. Relationships and patterns hidden in these data can provide new medical knowledge, as has been proved in a number of medical data mining applications. A doctor quickly swung into action after a renowned pharmaceutical company in the USA announced in 2001 that it was withdrawing a cholesterol-lowering drug following the deaths of more than 30 people. Using his medical records database, his staff was able to identify all patients taking the drug and notify them within 24 hours of the announcement. What the doctor did is technically known as data mining. Very few doctors, however, were able to act on the situation, because they did not have accessible raw data in electronic format.
Disciplined storage of medical data not only helps physicians and healthcare institutions, but also helps pharmaceutical companies mine the data to see trends in diseases. It also helps prioritize product development and clinical trials based on the accurate demand visible in the mined data.
Various data mining tasks can be applied to different disease data sets, helping even the doctor identify hidden associations between various symptoms. Research has been carried out on gene data, proteomic data, and disease-related attributes, including risk factors. Disease prediction has also been performed on scanned images, leading to medical imaging, the fastest-growing area. Much research has addressed breast cancer, liver diseases, other types of cancer, and heart disease, but there are very few articles on tuberculosis.
Tuberculosis
Tuberculosis (TB) is a common and often deadly infectious disease caused by mycobacteria; in humans the cause is mainly Mycobacterium tuberculosis. It usually spreads through the air and attacks people with weakened immune systems, such as patients with Human Immunodeficiency Virus (HIV). The disease can affect virtually all organs, not sparing even relatively inaccessible sites. The microorganisms usually enter the body by inhalation through the lungs and spread from the initial location in the lungs to other parts of the body via the bloodstream. They present a diagnostic dilemma even for physicians with a great deal of experience with this disease. In short, tuberculosis is a contagious bacterial disease caused by mycobacteria that usually affects the lungs and is often co-infected with HIV/AIDS.
It is a serious problem for most developing countries because of low diagnosis and treatment opportunities. Tuberculosis has the highest mortality among diseases caused by a single type of microorganism. Thus, tuberculosis is a major health concern all over the world, and in India as well (wikipedia.org).
Symptoms of TB depend on where in the body the TB bacteria are growing; they usually grow in the lungs. TB in the lungs may cause symptoms such as a bad cough lasting three weeks or longer, pain in the chest, and coughing up blood or sputum. Other symptoms of active TB disease are weakness or fatigue, weight loss, loss of appetite, chills, fever, and night sweats.
Although common and deadly in the developing world, tuberculosis was almost non-existent in the developed world but has recently been resurging. Drug-resistant strains are emerging, and people with immune suppression, such as those with AIDS or in poor health, are becoming carriers.
Data set description
The medical dataset used here includes 700 real records of patients suffering from TB, obtained from a city hospital. The entire dataset is kept in one file containing many records, each corresponding to the most relevant information about one patient. The doctor's initial queries about symptoms and some required test details have been taken as the main attributes. In total there are 12 attributes (symptoms), and the last attribute is treated as the class in the case of associative classification. The attributes for each patient are age, chronic cough (weeks), loss of weight, intermittent fever (days), night sweats, sputum, blood cough, chest pain, HIV, radiographic findings, wheezing, and TB type.
Table 1 shows the names of the 12 attributes along with their data types (DT); type N indicates numerical and C categorical.
Association Rule Mining
Association Rule Mining (ARM) is an important problem in the rapidly growing field of data mining and knowledge discovery in databases (KDD). The task of association rule mining is to mine a set of highly correlated attributes/features shared among a large number of records in a given database. For example, consider the sales database of a bookstore, where the records represent customers and the attributes represent books. The mined patterns are the sets of books most frequently bought together; an example could be that 60% of the people who buy Design and Analysis of Algorithms also buy Data Structure. The store can use this knowledge for promotions, shelf placement, and so on. There are many application areas for association rule mining techniques, including catalog design, store layout, customer segmentation, and telecommunication alarm diagnosis.
Definition of association rule
Here we give the classical definition of association rules. Let {t1, t2, ..., tn} be a set of transactions and let I be a set of items, I = {I1, I2, ..., Im}. An association rule is an implication of the form X → Y, where X and Y are disjoint subsets of I, i.e., X ∩ Y = ∅. X is called the antecedent and Y the consequent of the rule. In general, a set of items such as the antecedent or consequent of a rule is called an itemset. Each itemset has an associated measure of statistical significance called support: support(X) is the fraction of the transactions in the database containing X. A rule has a measure of strength called confidence, defined as the ratio support(X ∪ Y) / support(X) (J. Han & M. Kamber, 2006).
Given a set of transactions T, the goal of association rule mining is to find all rules having support ≥ minsup threshold and confidence ≥ minconf threshold.
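The support and confidence measures defined above can be computed directly from a transaction list; the following Python sketch uses hypothetical bookstore-style baskets, not the TB dataset:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def confidence(x, y, transactions):
    """conf(X -> Y) = support(X ∪ Y) / support(X)."""
    return support(set(x) | set(y), transactions) / support(x, transactions)

# Toy transactions (hypothetical baskets):
T = [{"algorithms", "datastructures"},
     {"algorithms", "datastructures", "compilers"},
     {"algorithms"},
     {"compilers"}]
print(support({"algorithms"}, T))                         # → 0.75
print(confidence({"algorithms"}, {"datastructures"}, T))  # → 0.666...
```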
Mining association rules is a two-step approach:
- Frequent itemset generation: generate all itemsets whose support ≥ minsup.
- Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset.
The Apriori algorithm employs two actions, a join step and a prune step, as explained in the following algorithm, to find frequent itemsets.
- Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent.
- The Apriori principle holds due to the following property of the support measure: the support of an itemset never exceeds the support of its subsets.
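A minimal sketch of the level-wise join and prune steps in Python; the helper names and toy transactions are illustrative, not an implementation from this chapter:

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Level-wise frequent-itemset mining with join and prune steps."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def sup(s):
        return sum(s <= t for t in transactions) / n

    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in sorted(items) if sup(frozenset([i])) >= minsup]
    frequent = list(level)
    k = 2
    while level:
        prev = set(level)
        # Join step: merge frequent (k-1)-itemsets into candidate k-itemsets.
        cands = {a | b for a in level for b in level if len(a | b) == k}
        # Prune step: every (k-1)-subset of a candidate must itself be frequent.
        cands = {c for c in cands
                 if all(frozenset(s) in prev for s in combinations(c, k - 1))}
        level = [c for c in cands if sup(c) >= minsup]
        frequent += level
        k += 1
    return frequent

# Toy run: {a, b} and {a, c} are frequent at minsup = 0.5, {b, c} is not.
T = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b"}]
print(apriori(T, 0.5))
```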
Rule Generation
Once the frequent itemsets from transactions in a database D have been found, it is straightforward to generate strong association rules from them, where strong association rules satisfy both minimum support and minimum confidence. Confidence is calculated as confidence(A → B) = support_count(A ∪ B) / support_count(A). Based on this equation, association rules can be generated as follows:
For each frequent itemset l, generate all non empty subsets of l.
For every nonempty subset s of l, output the rule "s → (l - s)" if support_count(l) / support_count(s) ≥ min_conf, where min_conf is the minimum confidence threshold.
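The two rule-generation steps above can be sketched as follows; the `support_count` table and the cough/TB items are hypothetical:

```python
from itertools import combinations

def generate_rules(frequent, support_count, min_conf):
    """For each frequent itemset l, emit s -> (l - s) whenever
    support_count(l) / support_count(s) >= min_conf."""
    rules = []
    for l in frequent:
        if len(l) < 2:
            continue  # singletons have no nonempty proper subsets
        for k in range(1, len(l)):
            for s in combinations(sorted(l), k):
                s = frozenset(s)
                conf = support_count[l] / support_count[s]
                if conf >= min_conf:
                    rules.append((s, l - s, conf))
    return rules

# Hypothetical counts: {cough} appears in 6 records, {TB} in 7,
# {cough, TB} in 5 — so cough -> TB has confidence 5/6 ≈ 0.83.
counts = {frozenset({"cough"}): 6, frozenset({"TB"}): 7,
          frozenset({"cough", "TB"}): 5}
rules = generate_rules(list(counts), counts, 0.8)
print(rules)
```

With min_conf = 0.8, only cough → TB survives; TB → cough (5/7 ≈ 0.71) is filtered out.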
Tuberculosis association rules
Tuberculosis association rules can be generated by applying the ARM technique with the following steps:
- Pre-processing the dataset by discretizing and normalizing it.
- Generating rules by applying Apriori to the pre-processed range data.
Pre-processing
Incomplete, noisy, and inconsistent data are common in real-world databases, so it is necessary to preprocess such data before use. The most common topics under data preprocessing are data cleaning, data integration, data transformation, data reduction, data discretization, and automatic generation of concept hierarchies.
Discretization and Normalization are the two data transformation procedures that help in representing the data and their relationships precisely in a tabular format that makes the database easy to understand and operationally efficient.This also reduces data redundancy and enhances performance.
The TB attributes above are normalized and discretized into a suitable binary format. A categorical data field has a value selected from an available list of values; such data items can be normalized by allocating a unique column number to each possible value. Numerical data fields take values within some range defined by minimum and maximum limits; in such cases we can divide the given range into a number of sub-ranges and allocate a unique column number to each sub-range.
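A small sketch of this normalization/discretization scheme in Python; the `binarize` helper and the example schema are illustrative, using two attributes loosely modeled on the TB dataset rather than its actual ranges:

```python
def binarize(records, schema):
    """Map categorical values and numeric sub-ranges to unique binary
    columns. `schema` gives, per attribute, either the category list or
    the list of (low, high) sub-range boundaries."""
    columns = []
    for name, spec in schema.items():
        if isinstance(spec[0], tuple):  # numeric: one column per sub-range
            columns += [f"{name}={lo}-{hi}" for lo, hi in spec]
        else:                           # categorical: one column per value
            columns += [f"{name}={v}" for v in spec]
    rows = []
    for rec in records:
        row = []
        for name, spec in schema.items():
            v = rec[name]
            if isinstance(spec[0], tuple):
                row += [1 if lo <= v < hi else 0 for lo, hi in spec]
            else:
                row += [1 if v == c else 0 for c in spec]
        rows.append(row)
    return columns, rows

# Hypothetical schema for two attributes:
schema = {"age": [(0, 18), (18, 45), (45, 120)], "nightsweats": ["yes", "no"]}
cols, rows = binarize([{"age": 30, "nightsweats": "yes"}], schema)
print(cols)  # → ['age=0-18', 'age=18-45', 'age=45-120', 'nightsweats=yes', 'nightsweats=no']
print(rows)  # → [[0, 1, 0, 1, 0]]
```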
Here we give a small example of five patients' medical records with five attributes; Table 2 shows the original data. Many of the generated rules may not be interesting to users; only a few rules, like those explained above, give a good description, and some hidden relationships may also be found.
We can see from the output that the left side (antecedent) and right side (consequent) of the rules keep interchanging repeatedly; this can be pruned by applying conditions on both the antecedent and the consequent of a rule.
Associative classification
Association Rule Mining (ARM), as explained in Section 3, is one of the most popular approaches in data mining and, if used in the medical domain, has great potential to improve disease prediction. However, it produces a large number of descriptive rules. ARM can therefore be integrated with the classification task to form a single system called Associative Classification (AC), which is a better alternative for predictive analytics.
Classification based on association rules has proved very competitive (Liu, B. et al., 1998). The general idea is to generate a set of association rules with a fixed consequent (involving the class attribute) and then use subsets of these rules to classify new examples. This approach has the advantage of searching a larger portion of the rule version space, since no search heuristics are employed, in contrast to decision trees and traditional classification rule induction. The extra search is done in a controlled manner, enabled by the good computational behaviour of association rule discovery algorithms.
Another advantage is that the rich rule set produced can be used in a variety of ways without relearning, which can improve classification accuracy (Jorge and Azevedo, 2005).
The procedure of associative classification rule mining, as shown in Figure 6, is not much different from that of general association rule mining. A typical associative classification system is constructed in two stages: 1) discovering all the event association rules (in which the frequency of occurrence is significant according to some tests); 2) generating classification rules from the association patterns to build a classifier. In the first stage, the learning target is to discover the association rules inherent in a database, but generating frequent itemsets may prove quite expensive. The number of rules generated by association rule discovery is quite large, so rule pruning is required; moreover, to avoid overfitting, a proper rule pruning method must be employed. Ranking of the rules is also important: when more than one rule is potentially applicable to a test instance, rule ranking is necessary to prefer one rule over the others. In the second stage, the task is to select a set of relevant discovered association rules to construct a classifier for the predicted attribute.
For example, given a rule X → Y, AC will only consider rules having a target class as the consequent. This means the integration focuses on a subset of association rules whose right-hand sides are restricted to the classification class attribute. This type of rule is called a Class Association Rule (CAR). While a normal association rule allows more than one condition in its consequent and any item from X can be the consequent, the CARs generated in AC limit the consequent to one fixed target class for each rule, and items from X are forbidden to appear as the class label. In order to perform AC, a classifier first mines CARs from a given transaction set and later selects the most predictive rules to build the classifier (Chien and Chen, 2010). AC generates CARs using the frequent-itemset generation technique of rule mining. Despite its benefits, AC poses challenges for classification performance. The most important issues are the approach to mining appropriate CARs for classification and the pruning technology, since AC generates a large number of frequent itemsets; a prominent pitfall is its inability to handle numerical data (Chien and Chen, 2010). Generally, AC consists of three main phases: rule generation, rule pruning, and classification (Do et al., 2009; Tang and Liao, 2007). Performance, however, may differ depending on the algorithm employed in any of these three phases.
CBA
The first AC algorithm, CBA, was introduced by Liu, B. et al. (1998). The algorithm is based on the Apriori association rule algorithm for generating CARs. These rules are later pruned, and only the single most suitable rule is used to classify the test set. Essentially, the CBA algorithm performs three tasks: first, it mines all CARs; second, it produces a classifier from the CARs; and finally, it mines normal association rules.
Generation of CARs
In CBA, the class association rules (CARs) are found iteratively in an Apriori-like fashion. First, frequent 1-rule itemsets are generated and pruned; iterating in the same way, the other frequent rule itemsets are also found. They are then pruned to obtain the complete set of class association rules.
Building classifier (Ranking and Pruning Rules)
To prune the rules, CBA uses the pessimistic-error-based pruning method of C4.5. Rule ranking is defined as follows. Given two rules r_i and r_j, r_i > r_j (i.e., r_i precedes r_j, or r_i has higher precedence than r_j) if one of the following holds:
1. The confidence of r_i is greater than that of r_j.
2. Their confidences are the same, but the support of r_i is greater than that of r_j.
3. Both the confidences and supports of r_i and r_j are the same, but r_i was generated before r_j.
After rule ranking, each training instance is covered by the highest-precedence rule among the rules that can cover it. Every retained rule correctly classifies at least one training instance; rules that do not cover any training instance are removed. Training instances that do not fall under any of the retained rules are assigned to a default class.
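The three-part precedence test can be expressed directly as a comparator; a minimal sketch (the tuple representation of a rule as confidence, support, and generation order is invented for illustration):

```python
from functools import cmp_to_key

def precedes(ri, rj):
    """Return -1 if rule ri precedes rj under CBA's ordering:
    higher confidence wins, then higher support, then earlier generation."""
    conf_i, sup_i, gen_i = ri
    conf_j, sup_j, gen_j = rj
    if conf_i != conf_j:
        return -1 if conf_i > conf_j else 1
    if sup_i != sup_j:
        return -1 if sup_i > sup_j else 1
    return -1 if gen_i < gen_j else 1

def rank_rules(rules):
    """Sort rules so the highest-precedence rule comes first."""
    return sorted(rules, key=cmp_to_key(precedes))
```

Because the comparator is a total order over distinct rules, the highest-precedence covering rule for each training instance is well-defined.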
The multiple capabilities of CBA solve a number of problems in traditional classification systems. Since traditional classifiers generate only a small subset of the rules that exist in the data to form a classifier, the discovered rules may not be interesting; moreover, generating more rules would require the classification system to load the entire database into main memory. Because CBA generates all rules, it is more successful at finding interesting rules, and the system also allows the data to reside on disk. However, in CBA the rule generation process might degrade the accuracy of the classifier due to randomness in selecting the most suitable rule to form the classifier model. CBA also inherits Apriori's multiple-scan behavior, which generates a large number of rules and is costly in terms of computational time.
CMAR
CMAR was later introduced as an extension of CBA (Li et al., 2001). The CMAR algorithm implements the FP-Growth algorithm instead of Apriori to generate its frequent itemsets.
CPAR

In CPAR, the Local Gain Threshold (LGT) is given by:
LGT = bestGain * GAIN_SIMILARITY_RATIO
where GAIN_SIMILARITY_RATIO is a constant whose value is 0.99.
CPAR takes as input a (space-separated) binary-valued data set R and produces a set of CARs. The resulting classifier comprises a linked list of rules ordered according to Laplace accuracy. CPAR also uses a dynamic programming approach to avoid repeated calculation during rule generation, which makes it more economical. More importantly, CPAR selects the best k rules for prediction.
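The Laplace expected accuracy used to order the rule list, and the best-k selection for prediction, can be sketched as follows. The rule tuple layout and function names are ours; this is a simplified illustration of the scoring, not CPAR's full prediction procedure.

```python
def laplace_accuracy(n_correct, n_covered, n_classes):
    """Laplace expected accuracy of a rule: (n_c + 1) / (n_tot + k),
    where n_c is the number of covered examples of the rule's class,
    n_tot the total examples covered, and k the number of classes."""
    return (n_correct + 1) / (n_covered + n_classes)

def best_k_rules(rules, k):
    """Keep the k rules with the highest Laplace accuracy for prediction.
    rules: list of (rule_id, n_correct, n_covered, n_classes)."""
    scored = sorted(rules, key=lambda r: laplace_accuracy(*r[1:]), reverse=True)
    return [r[0] for r in scored[:k]]
```

The +1 and +k terms smooth the raw accuracy, so a rule covering few examples cannot achieve a deceptively perfect score.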
Predictive accuracy and rules of associative classifiers
The difference between ARM and AC with respect to their results is that the former generates only a large number of descriptive rules, whereas the latter generates fewer rules along with a performance measure in the form of accuracy.
CBA generates around 81 rules; once pruned, only two rules remain, with an accuracy of 81.14%. Comparing ARM and AC rules, it can be seen that AC rule sets are smaller and more descriptive, and CPAR provides better rules than all the other algorithms.
Summary
In this chapter, two data mining techniques that help in the diagnosis of tuberculosis have been discussed. Medical databases have accumulated large quantities of information about patients and their clinical conditions, and the digital era has made this information available in abundance. Data mining is a knowledge discovery process that helps in extracting the relationships and patterns hidden in this data and can provide new medical knowledge to doctors for use in their treatment procedures.
Association rule mining (ARM) is one of the most popular approaches in data mining and, when used in the medical domain, has great potential to improve disease prediction. It shows doctors the hidden disease symptoms associated with one another. There are many ARM algorithms, and the most popular is Apriori. It works in two phases. The first is frequent itemset generation, where all itemsets in the database whose support exceeds some minimum specified threshold are generated. The second is rule generation, which generates from the frequent sets association rules of the form X -> Y satisfying a minimum confidence: whenever X appears, there is a chance that Y also appears along with it, with at least the minimum confidence. These concepts, applied to the TB dataset, reveal important associations between the symptoms. However, this method results in a large number of repetitive rules.
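The two measures underlying both phases can be sketched in a few lines; the symptom items in the usage below are hypothetical:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(x, y, transactions):
    """Confidence of the rule X -> Y: support(X union Y) / support(X)."""
    return support(x | y, transactions) / support(x, transactions)
```

For a rule {cough} -> {TB}, confidence answers: of the records containing cough, what fraction also contain TB?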
Associative classification (AC) is another data mining approach, one that integrates association rule mining and classification. It uses an association rule mining algorithm, such as Apriori or frequent pattern growth, to generate the complete set of association rules. It then selects a small set of high-quality rules and uses this rule set for prediction. This method results in a smaller number of rules compared to ARM.
Three important AC algorithms, CBA, CMAR, and CPAR, have been discussed in this chapter. Almost every such algorithm contains two major data mining steps: an association rule (AR) mining stage, whose generated rules are called class association rules (CARs), and a classification stage, which uses the mined rules from the first stage directly. The second stage chooses high-priority rules from the CARs to cover the training set. The difference between the algorithms lies in the priority evaluation of rules, which usually depends on confidence, support, rule length, or a common quality standard for classification rules. CPAR is better at rule generation than the others. TB rules and accuracy have been compared for every associative classification algorithm. Though the entire rule set may not help doctors, a few rules may describe the relationship between one symptom and another, and sometimes they can reveal hidden relationships.
This is known as the anti-monotone property of support. Since the processing of the Apriori algorithm requires plenty of time, its computational efficiency is a very important issue. In order to improve the efficiency of Apriori, many researchers have proposed modified association-rule algorithms.
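Concretely, the anti-monotone property says an itemset's support can never exceed the support of any of its subsets, which is what lets Apriori discard every superset of an infrequent candidate. A quick check on hypothetical transactions:

```python
from itertools import combinations

def support_count(itemset, transactions):
    """Number of transactions containing every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def holds_anti_monotone(transactions):
    """Verify support(S) <= support(T) for every itemset S and every
    immediate subset T of S (which implies it for all subsets)."""
    items = sorted({i for t in transactions for i in t})
    for size in range(2, len(items) + 1):
        for combo in combinations(items, size):
            s = frozenset(combo)
            for sub in combinations(combo, size - 1):
                if support_count(s, transactions) > support_count(frozenset(sub), transactions):
                    return False
    return True
```

The property holds for any transaction database, because adding an item to a pattern can only shrink the set of transactions that contain it.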
Table 2. Original (raw) data

Table 3 contains the schema of how the attributes are mapped to individual column numbers. Table 4 is the final translated, or normalized, data.
Table 3. Schema table

In the above tables, note that Age is a numerical attribute and its cut-off point is <25 and >=25. Similarly, HIV is a categorical attribute where the positive value is assigned one number and the negative value another. The value Null for the categorical attribute weightloss is equivalent to No and is assigned a unique number. Using the schema table above, we map each tuple in the original data of Table 2 to the resulting normalized table shown in Table 4. The resulting table has the same number of columns as the original table, but is filled with unique integer values.
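The normalization step can be sketched as follows. The attribute names, the Age cut-off of 25, and the Null-means-No convention come from the text above; the code structure itself is illustrative.

```python
def normalize(rows):
    """Translate raw tuples into rows of unique integer codes,
    assigning a fresh code to each new (attribute, value) pair."""
    schema = {}          # (attribute, value) -> unique integer
    normalized = []
    for row in rows:
        coded = []
        for attr, val in row.items():
            if attr == "Age":                 # numerical attribute: discretize
                val = "<25" if val < 25 else ">=25"
            if val is None:                   # Null categorical value means No
                val = "No"
            key = (attr, val)
            if key not in schema:
                schema[key] = len(schema) + 1
            coded.append(schema[key])
        normalized.append(coded)
    return normalized, schema
```

The normalized table keeps the original column count but contains only integers, as in Table 4.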
Genetic Background Influences Acute Response to TBI in Kindling-Susceptible, Kindling-Resistant, and Outbred Rats
We hypothesized that the acute response to traumatic brain injury (TBI) shares mechanisms with brain plasticity in the kindling model. Utilizing two unique, complementary strains of inbred rats, selected to be either susceptible or resistant to seizure-induced plasticity evoked by kindling of the perforant path, we examined acute electrophysiological alterations and differences in brain-derived neurotrophic factor (BDNF) protein concentrations after a moderate-to-severe brain injury. At baseline, limited strain-dependent differences in acute electrophysiological activity were found, and no differences in BDNF. Following injury, pronounced strain-dependent differences in electrophysiologic activity were noted at 0.5 min. However, the divergence is transient, with diminished differences at 5 min after injury and no differences at 10 and 15 min after injury. Strain-specific differences in BDNF protein concentration were noted 4 h after injury. A simple risk score model generated by machine learning and based solely on post-injury electrophysiologic activity at the 0.5-min timepoint distinguished perforant path kindling susceptible (PPKS) rats from non-plasticity-susceptible strains. The findings demonstrate that genetic background which affects brain circuit plasticity also affects acute response to TBI. An improved understanding of the effect of genetic background on the cellular, molecular, and circuit plasticity mechanisms activated in response to TBI and their timecourse is key in developing much-needed novel therapeutic approaches.
INTRODUCTION Traumatic brain injury (TBI) is a major cause of death and disability, impacting all demographic groups. The mechanisms producing TBI are diverse, resulting in injuries ranging from mild to severe, with considerable variation in outcome. Prediction of sequela following TBI based on clinical presentation and imaging is challenging, as comparable injuries can have divergent outcomes, both at early and later stages. These observations suggest that other factors, such as genetic background, influence initial manifestations and secondary injury processes such as inflammation, lesion-induced plasticity and circuit repair, leading either to improvement or to delayed adverse consequences. Therefore, studying the role of genetic influences on the complex sequence of pathological and restorative processes that follow TBI may have important clinical implications.
Unsurprisingly, TBI acutely alters electrophysiologic activity. Human EEG studies obtained acutely after injury demonstrate primarily diffuse slowing (1)(2)(3)(4). Animal studies, most conducted prior to the advent of modern recording and analysis techniques, demonstrate complicated results, likely resulting from differences in experimental approaches including experimental animals, mechanism of injury, and anesthesia. The majority of these studies demonstrate slowing and reduced amplitude of cerebral activities (5)(6)(7)(8)(9), with potentially epileptiform activity noted under some conditions (10,11). Advances in recording capabilities, signal analysis, and improved methods for controlled and reproducible induction of experimental TBI offer an opportunity to advance understanding of acute changes in brain electrophysiologic activity after injury which, despite the importance of understanding brain injury at this early timepoint, has not been extensively explored.
Moderate-to-severe brain injuries involve direct mechanical damage with shearing forces, hemorrhage, excitotoxic necrosis, as well as more slowly evolving processes of plasticity which in a substantial subset of cases result in the delayed development of post-traumatic epilepsy (PTE) (12). Processes of circuit remodeling that increase susceptibility to seizures and include permanent structural and functional changes, such as the kindling model (13)(14)(15)(16), may be relevant to the brain's response to TBI. For example, the neurotrophin brain-derived neurotrophic factor (BDNF) and its receptor tropomyosin receptor kinase B (TrkB) are critical for the progressive circuit alterations in the kindling model (17)(18)(19) and these same pathways play an important role following TBI (20,21). Genetic differences have been demonstrated to be important in TBI, both in human (22)(23)(24) and animal studies (25), and many of these factors are also known to influence epilepsy. Therefore, genetic differences impacting plasticity in a model of epilepsy may be expected to impact response to TBI.
We hypothesized that the acute response to TBI shares mechanisms with brain plasticity in the kindling model, including involvement of BDNF. As the time course relevant for the development of TBI-related sequela such as PTE and cognitive deficits is unknown, we examined the earliest time points for divergent responses to TBI in the inbred strains and outbred rats. In addition to their divergent responses to seizure induction in the kindling model, these strains also demonstrate differences in behavior and learning paradigms which are known to change in brain-injured animals (26)(27)(28)(29)(30). Therefore, we examined acute electrophysiological alterations and BDNF expression after TBI in these unique, complementary strains as well as in outbred SD rats.
Animals
We utilized novel strains of inbred Sprague-Dawley (SD) rats, selected for either increased rate (perforant path kindling susceptible, PPKS) or decreased rate (perforant path kindling resistant, PPKR) of perforant path kindling over the course of >15 generations (27). Additionally, out-bred SD rats, representing the parent strain, were acquired from a supplier (Envigo). Rats were 3-4 months of age at the time of surgery, and male and female rats were used in approximately equal numbers. Animals were maintained under 12 h light: 12 h dark cycles, with ad libitum food and water, in a vivarium under the care of the University of Wisconsin veterinarians. All animal handling and procedures were performed according to the NIH Guide for the Care and Use of the Laboratory Animals and the experiments were conducted under an approved protocol by the University of Wisconsin Institutional Animal Care and Use Committee.
Surgery
Prior to the procedure (Figure 1A), rats (PPKS n = 12, 7 males and 5 females; SD n = 8, 4 males and 4 females; PPKR n = 12, 8 males and 4 females) were weighed and anesthesia was induced with 5% isoflurane (Piramal) in 100% O 2 . The rat was placed into a stereotaxic frame with ear bars (Kopf Instruments) with bupivacaine (0.5%, SC, Fresenius Kabi USA, LLC) injected at contact points in the external auditory canals and along the midline of the scalp and with atropine (0.05 mg/kg IM, West Ward). Urethane (1.2 g/kg divided into three doses, IP, Sigma) was given immediately after induction with isoflurane, and isoflurane was weaned as tolerated, as assessed by tail flick in response to pinch and corneal reflex. Following the initial dosing, urethane-induced anesthesia persisted through the 4 h of this experiment. The scalp of the rat was shaved and prepared with topical betadine and alcohol along the midline. The skull was exposed and burr holes were drilled 1.5 mm anterior and 1.5 mm lateral (both left and right) to bregma, and a blind hole was drilled 1.5 mm posterior to lambda along the midline ( Figure 1B). Coated stainless steel wire (0.010" bare diameter, 0.0130" coated, A-M Systems) was placed into these burr holes (into the epidural space for the anterior holes and into a blind hole in the skull for the posterior hole) and secured with a screw. A circular craniectomy, ∼4 mm in diameter, was created over the right hemisphere, placed within the angle of the sagittal and lambdoid sutures ( Figure 1B).
Isoflurane was completely stopped at least 10 min prior to recording electrical activity from the left and right epidural electrodes. Electrophysiologic recordings were performed utilizing an XLTEK EEG acquisition system (Neuroworks, version 7.1.1) with an EEG32U amplifier (sampled at 1,024 Hz). Electrophysiologic activity was recorded for 5 min prior to delivery of a CCI and for 20 min following injury ( Figure 1A). CCI was delivered by Leica Impact One Stereotaxic Impactor (Leica), utilizing a 3 mm circular blunt impact tip with a velocity of 6 m/s and a dwell time of 500 ms ( Figure 1C). As the brains of rats in this study were microdissected, a representative chronic injury, as visualized by coronal CT images and a 3D reconstruction ( Figure 1D) is presented. The images are from a PPKS rat, 6 months after a CCI identical to the injury utilized in this study.
Four hours after CCI a subset of rats (PPKS n = 7, 5 males and 2 females; SD n = 5, 3 males and 2 females; PPKR n = 7, 4 males and 3 females) were euthanized by decapitation under deep isoflurane anesthesia. Following decapitation, the brain was rapidly dissected on ice to isolate posterior cortex (midline to rhinal sulcus, bilaterally), hippocampus (bilaterally), and cerebellum. Brain tissue was frozen in liquid nitrogen and stored at −80°C. A set of control rats (n = 5, 3 males and 2 females, for each strain) from each strain were euthanized with isoflurane and decapitated without prior surgery or CCI.
Electrographic Analysis
The CCI was marked on the EEG recording in real time and was confirmed by the electrical artifact of the impactor. A 60 s epoch of EEG ending 0.5 min prior to CCI was selected as a pre-injury baseline. Post-injury 60 s epochs beginning 0.5, 5, 10, and 15 min after CCI were selected for analysis (Figure 1A). The EEG samples were exported as a text file and imported into Matlab (R2017b, Mathworks). Electrophysiologic activity was bandpass filtered using an equiripple filter, retaining frequencies between 0.5 and 32 Hz, binned at 0.5 Hz intervals. Power spectral density functions, a measure of power at different frequencies, were generated using a short-time Fourier transform with a Hamming window of 512 points and an overlap of 128 points. The post-CCI power spectral density was normalized to the baseline total power for each rat. Spectral entropy, a measure of the complexity of the signal, was calculated as H_sp = −Σ_{i=f_l..f_h} P_i log P_i, where P is the normalized power density and f_l and f_h are the lower (0.5 Hz) and upper (32 Hz) frequency limits (31). Magnitude-squared coherence, a measure of the similarity between two signals, was calculated as C_xy(f) = |P_xy|^2 / (P_xx P_yy), where P_xx and P_yy are the power spectral densities of x and y, respectively, and P_xy is the cross power spectral density of x and y, computed with a window of 512 points and an overlap of 128 points. Kurtosis, a measure of the frequency of outliers in a signal and often used as a measure of "sharpness" of electrographic activity, was calculated as the fourth standardized moment of the signal. Line length, often used as a measure of electrographic activity, was calculated as the sum of the absolute differences between consecutive samples.
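The scalar measures above can be sketched in a few lines. This is an illustrative NumPy reimplementation, not the paper's Matlab code: a plain FFT periodogram stands in for the short-time Fourier transform, and the entropy is computed over the stated 0.5 to 32 Hz band.

```python
import numpy as np

def spectral_entropy(x, fs, f_lo=0.5, f_hi=32.0):
    """H_sp = -sum(P_i * log(P_i)) over the normalized power in [f_lo, f_hi]."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    p = psd[band] / psd[band].sum()
    return float(-np.sum(p * np.log(p + 1e-24)))   # small epsilon avoids log(0)

def kurtosis(x):
    """Fourth standardized moment; larger values indicate 'sharper' activity."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.mean(z ** 4) / np.mean(z ** 2) ** 2)

def line_length(x):
    """Sum of absolute sample-to-sample differences."""
    return float(np.sum(np.abs(np.diff(x))))
```

A pure sinusoid concentrates its power in one frequency bin and so yields near-zero spectral entropy, while broadband activity spreads power across bins and yields higher entropy.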
BDNF Protein Concentration
Dissected brain tissue (cortex, hippocampus, cerebellum) was collected, stored at −80°C, thawed, and homogenized by pestle in RIPA buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS, 2 mM EDTA, Teknova) with protease inhibitor cocktail (104 mM AEBSF, 80 µM Aprotinin, 4 mM Bestatin, 1.4 mM E-64, 2 mM Leupeptin, and 1.5 mM Pepstatin A, Sigma). The homogenized tissue was left on ice for 15 min and then centrifuged at 15,000 RCF for 15 min at 4°C. The supernatant was retained and its protein quantitated by a BCA protein assay (Pierce, ThermoFisher). The samples were acid treated with addition of HCl to a pH of 2-3 for 15 min, then neutralized with NaOH. BDNF content was assayed by a sandwich enzyme-linked immunosorbent assay (ELISA) (BDNF Emax ImmunoAssay System, Promega), utilizing a monoclonal anti-BDNF antibody for plate coating, and a human polyclonal anti-BDNF antibody with an anti-IgY HRP conjugate for colorimetric detection. A BDNF protein standard curve, performed in duplicate, was included on all plates. All samples were assayed in triplicate and then averaged. BDNF protein was quantified relative to total protein.
Machine Learning Risk Score Model
A machine learning method, the Risk-Calibrated Supersparse Linear Integer Model (RiskSLIM) (32), uses optimization techniques to find the best logistic regression model with bounded integer coefficients and a limited number of risk factors. The RiskSLIM method was utilized to generate a risk score for a rat belonging to the plasticity-susceptible strain (PPKS), as opposed to the non-plasticity-susceptible strains (SD or PPKR), based solely upon post-CCI electrographic parameters at 0.5 min after injury.
Experimental Design and Statistical Analysis
Selection of samples of EEG data and BDNF ELISAs were performed in a blinded fashion. All results are presented as mean ± SEM, including power spectra and magnitude-squared coherence. Data were analyzed with JMP Pro 13 (SAS Institute, Inc). Comparisons of power spectral density and interhemispheric coherence, using frequency bins from 0.5 to 32.5 Hz in 0.5 Hz steps, were analyzed by a least squares fit model, testing model construct effects for strain (PPKS vs. SD vs. PPKR), side (ipsilateral vs. contralateral), and/or timepoint (baseline vs. 0.5, 5, 10, or 15 min post-CCI). Otherwise data were analyzed by ANOVA, using Tukey's HSD test for post-hoc analysis with α = 0.05. The groups included in each ANOVA are those presented in the corresponding figure. No differences were noted between males and females for any of the groups or experiments, and therefore the sexes were combined.
Baseline
At baseline, no differences in the power spectrum of the electrophysiologic activity were noted between the right (ipsilateral to the subsequent CCI) and left (contralateral to the subsequent CCI) hemispheres for any of the strains. Bilaterally, baseline electrophysiologic activity of PPKR rats demonstrated greater power in the slower frequencies (0.5 to 2 Hz) and less power at intermediate and faster frequencies (4 to 32 Hz), as compared to the baseline of SD and PPKS rats (Figure 2A). Magnitude-squared coherence at baseline demonstrated decreased coherence between the left and right hemispheres in SD rats at intermediate frequencies (6 to 11.5 Hz), as compared to PPKS and PPKR rats (Figure 2B). No differences were noted in either entropy (Figure 2C, Supplementary Table 1) or kurtosis (Figure 2D, Supplementary Table 1) at baseline. Prior to CCI, no differences were found in either total power or line length (Supplementary Table 1).
Post-traumatic Changes in Electrophysiologic Activity
Immediate (0.5 min) Activity

At 0.5 min following CCI, PPKS rats did not demonstrate significant changes from baseline in the power spectrum, neither ipsilateral (right hemisphere) nor contralateral (left hemisphere) to the CCI (Figure 3A). The electrophysiologic activity of SD rats demonstrated a broad reduction in power both ipsilateral and contralateral to the CCI, with decreased power seen at 3.5 to 31.5 Hz and a trend toward greater reduction in power ipsilateral to the injury (Figure 3B). PPKR rats also demonstrated a bilateral reduction in power after CCI, albeit with statistically significant decreases limited to two narrow bands at 5.5 to 7 and 24.5 to 25.5 Hz (Figure 3C). Comparing across strains following CCI, broad reductions in the power of electrophysiologic activity were seen in SD and PPKR rats as compared to PPKS rats. Ipsilateral to the CCI, PPKS rats retained greater power at 5.5 to 6.5 and 9.5 to 32 Hz (Figure 3D), while contralateral to the CCI PPKS rats retained greater power at 4.5 to 31.5 Hz (Figure 3E). At the 0.5-min post-injury timepoint, all strains displayed a loss of interhemispheric coherence in intermediate frequencies, with PPKS rats demonstrating a loss of coherence at 3 to 7 Hz (Figure 4A), SD rats demonstrating a loss at 3 to 6 Hz (Figure 4B), and PPKR rats demonstrating a loss at 3.5 to 6.5 Hz (Figure 4C). Comparing among strains following CCI, significant differences were seen in interhemispheric coherence between 0.5 to 2 Hz which differentiated all three strains, with PPKS rats having the lowest coherence, SD rats having intermediate coherence, and PPKR rats having the greatest coherence (Figure 4D). PPKS and SD rats demonstrated a decrease in entropy ipsilateral to the injury (Figure 7A, Supplementary Table 1), but not contralateral to the injury (Supplementary Table 1). No significant differences in entropy exist among post-CCI PPKS, post-CCI SD, baseline PPKR, and post-CCI PPKR rats (Figure 7A).
Following CCI, kurtosis increased in PPKR rats ipsilateral to the injury (Figure 7B, Supplementary Table 1), though no change was seen contralateral to the injury (Supplementary Table 1). No differences in kurtosis were seen in PPKS or SD rats (Supplementary Table 1). Following CCI, no changes in total power or line length were found for PPKS, SD, or PPKR rats (Supplementary Table 1).
Early (5, 10, and 15 min) Activity
At 5 min following CCI, PPKS rats demonstrated greater power than PPKR rats at 6.5 to 7.5, 12.5 to 22, and 24 to 29.5 Hz ipsilateral to the injury (Figure 5A). SD rats did not demonstrate differences from the PPKS or PPKR rats ipsilateral to the injury at the 5-min timepoint. No inter-strain differences were noted at 5 min following CCI contralateral to the injury (Figure 5B), and no inter-strain differences were noted at 10 or 15 min following CCI, either ipsilateral or contralateral to the injury (Figures 5C-F). At the 5-, 10-, and 15-min timepoints, no inter-strain differences in interhemispheric coherence were noted (Figures 6A-C). At 5, 10, and 15 min following CCI, PPKS and SD rats demonstrated a significant decrease in entropy ipsilateral to the injury (Figure 7A, Supplementary Table 1), but not contralateral to the injury (Supplementary Table 1). PPKR rats did not demonstrate a decrease in entropy at 5, 10, or 15 min after injury, either ipsilateral (Figure 7A, Supplementary Table 1) or contralateral to the injury (Supplementary Table 1). No statistically significant differences in kurtosis were noted at the 5-, 10-, or 15-min timepoints for the PPKS, SD, or PPKR strains, either ipsilateral or contralateral to the injury (Figure 7B, Supplementary Table 1). At 5, 10, and 15 min following CCI, no changes in total power or line length were found for PPKS, SD, or PPKR rats across time (Supplementary Table 1).
BDNF Protein
In uninjured rats, no differences in BDNF protein concentration were found among the strains in the ipsilateral cortex, contralateral cortex, ipsilateral hippocampus, contralateral hippocampus, or cerebellum (Supplementary Table 2). Comparing uninjured and injured rats, BDNF was greater in the cortex ipsilateral to the injury in SD and PPKR rats but no difference was seen in PPKS rats (Figure 8A, Supplementary Table 2). Similarly, BDNF was greater in the cortex contralateral to the injury in PPKR rats, but no difference was seen in PPKS or SD rats (Figure 8B, Supplementary Table 2). BDNF was greater in the hippocampus of injured PPKS rats than in uninjured PPKS rats, though no significant differences were noted in SD or PPKR rats ( Figure 8C, Supplementary Table 2). No significant differences in BDNF were seen in the contralateral hippocampus, comparing uninjured and post-CCI rats from the PPKS, SD, or PPKR strains (Figure 8D, Supplementary Table 2). No differences in cerebellar BDNF were seen between uninjured and injured rats in the PPKS or SD strains, though cerebellar BDNF was greater in injured PPKR rats than in uninjured PPKR rats (Figure 8E, Supplementary Table 2).
Risk Score Tool (RiskSLIM)
Rats were divided into two groups: plasticity-susceptible rats (PPKS rats) and rats that are not plasticity-susceptible (SD and PPKR rats). Parameters of electrophysiologic activity recorded at 0.5 min after the CCI were collected, including total (non-normalized) power; ipsilateral and contralateral percent band power in delta (0.5 to 4 Hz), theta (4.5 to 8 Hz), alpha (8.5 to 13 Hz), and beta (13.5 to 32.0 Hz); interhemispheric coherence in the delta, theta, alpha, and beta bands; ipsilateral and contralateral entropy; ipsilateral and contralateral kurtosis; and ipsilateral and contralateral line length. Dividing-point values for each parameter were identified with a partitioning approach based on the LogWorth statistic (JMP, SAS Institute Inc). The RiskSLIM method (32) was used with a limit of 5 risk factors and integer coefficients from −1 to 1.
The resultant risk score tool incorporated one point each for a magnitude-squared coherence in the delta band of <3, a beta band power of <3% over the contralateral hemisphere, and a kurtosis of <4 over the contralateral hemisphere (Supplementary Table 3). Using this tool, a score of 0 or 1 is associated with a 6.7% probability of the rat belonging to a plasticity-susceptible strain (PPKS), while a score of 2 is associated with a 75.0% probability and a score of 3 with an 88.9% probability of the rat belonging to a plasticity-susceptible strain (Supplementary Table 3).
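Applying the resulting score can be sketched as follows. The point thresholds and probabilities are those reported above; the function name and the lookup-table structure are ours for illustration.

```python
def ppks_risk_score(delta_coherence, contra_beta_pct, contra_kurtosis):
    """One point per criterion met, as in the RiskSLIM-derived tool."""
    score = 0
    if delta_coherence < 3:      # magnitude-squared coherence, delta band
        score += 1
    if contra_beta_pct < 3:      # beta band power (%), contralateral hemisphere
        score += 1
    if contra_kurtosis < 4:      # kurtosis, contralateral hemisphere
        score += 1
    return score

# Reported probability that the rat belongs to the plasticity-susceptible (PPKS) strain
PPKS_PROBABILITY = {0: 0.067, 1: 0.067, 2: 0.750, 3: 0.889}
```

The appeal of such integer score tools is that they can be evaluated by hand at the bedside (or benchside) while remaining calibrated to the underlying logistic regression.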
DISCUSSION
We demonstrated that unique, complementary strains of inbred rats with a genetic background selected for susceptibility or resistance to kindling-induced plasticity exhibit distinct acute responses to moderate-to-severe TBI. Furthermore, the distinctive responses to TBI are brief, present at 0.5 min after injury but not seen at 5, 10, or 15 min after injury. Our findings reveal important changes in electrophysiologic activity following brain injury. While these differences in electrophysiologic activity are transient, they are correlated with divergent patterns of BDNF protein expression, which is known to produce long-lasting and wide-ranging effects (33). Furthermore, these results provide an important foundation to explore later sequela of TBI in these unique strains.
These findings demonstrate the influence of genetic background affecting brain circuit plasticity on acute responses to TBI. The current experiments involve unique strains of inbred rats, selected for phenotype and therefore unbiased by expectations based on prior knowledge. Other investigations targeted at specific pathways have also demonstrated overlap between mechanisms of neuroplasticity and the response to TBI. In humans, genetic polymorphisms in the BDNF gene are associated with differences in cognitive outcome after head trauma, both at early timepoints (1 month) (23) and at later timepoints (10-15 years) (24). Animal models of TBI have likewise demonstrated a connection between BDNF/TrkB signaling and TBI (21), including at times as early as 4 h after injury (20). Apolipoprotein E (ApoE) similarly has a role in neuroplasticity in normal physiology (34), and specific alleles of ApoE affect outcome after TBI (35)(36)(37). The genetic mechanisms associated with neuroplasticity in the kindling model and with response to TBI in the PPKS and PPKR strains are the subject of on-going investigations and have the potential to provide independent support for the role of BDNF/TrkB and other known mechanisms, as well as to identify unexpected or novel mechanisms.
The lack of extensive differences between the PPKS, PPKR, and SD strains at baseline is consistent with the selection method used for generating the inbred strains, which employed a response to a brain stimulus rather than a static trait. Therefore, the plasticity potential of the strains remains latent in the baseline state and few differences are noted. However, following injury more pronounced differences in electrophysiologic activity emerged between the strains at the 0.5-min timepoint, but these differences were much reduced at 5 min after injury and were no longer present at 10 or 15 min after the injury. Overall PPKR rats demonstrate a predominance of interhemispherically coherent slow frequencies and low signal complexity both at baseline and post-injury, which resembles the post-injury state of the PPKS and SD strains. Conversely, following injury PPKS rats have a power spectrum that resembles the uninjured, baseline state of SD rats. These findings are consistent with the hypothesis that the injury-induced electrophysiologic state in SD rats resembles the baseline state in PPKR rats, and that the injury-induced responses of outbred rats may not be fully present in PPKS rats.
In the uninjured state, examination of BDNF protein across the three strains demonstrated no significant differences. However, as with electrophysiologic activity, significant differences were noted when assessing the effect of injury. Injured PPKS rats demonstrated a large increase in hippocampal BDNF protein ipsilateral to the injury as compared to uninjured PPKS rats. Injured SD and PPKR rats demonstrated an increase in cortical BDNF as compared to uninjured controls which was not observed in the PPKS strain, with changes in cortical BDNF observed bilaterally in PPKR rats but only ipsilaterally in SD rats. BDNF has been demonstrated to be involved in a plethora of brain processes, often with complicated anatomical and temporal patterns (38)(39)(40). Changes in BDNF have been described in multiple animal models of TBI (21,(41)(42)(43)(44)(45), and blood and CSF BDNF have been proposed as biomarkers for TBI (22,46,47), although the relationship between brain injury and BDNF appears complex. BDNF polymorphisms in humans are associated with differences in survival (48) and cognitive outcome after TBI (23,24). Furthermore, BDNF is involved in multiple processes relevant to sequela of TBI, including neuroprotection (49), epileptogenesis (50), memory and cognition (51), and mental health conditions such as depression and post-traumatic stress disorder (PTSD) (52). The current findings identify anatomical patterns of BDNF very early following injury which are dependent on genetic background and which, given the divergent outcomes after TBI of the inbred strains, may be correlated with clinically important sequela of TBI. These results will help to advance our understanding of the intricate role of BDNF and associated signaling following injury, and to guide further development of emerging BDNF-related treatments for brain injury (53).
Risk models play an important role in medicine (54), informing prognosis and guiding treatment decisions. Currently our ability to predict outcome after TBI in clinical situations is limited, with most tools focused on survival (55,56) or outcome at the level of the Glasgow Coma Scale (GCS) (57), rather than specific sequelae, though some recent work suggests that EEG may be used to predict later PTE (58). We used a machine learning approach to generate a simple risk score model which distinguishes plasticity-susceptible rats (PPKS rats) from non-plasticity-susceptible strains (SD and PPKR rats) using solely post-CCI electrophysiologic activity recorded at the 0.5-min timepoint. The ability to generate this model demonstrates the degree of divergence in the electrophysiologic response to TBI attributable to genetic background.
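The authors' actual risk score model is not reproduced here; as a hedged illustration of the general approach, a simple logistic-regression classifier can be fit to two invented electrophysiologic features (relative delta power and a complexity measure) to separate susceptible from non-susceptible animals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features per rat: [relative delta power, signal complexity].
# PPKS-like rats (label 1) vs. SD/PPKR-like rats (label 0); values invented.
X1 = rng.normal([0.3, 0.8], 0.05, size=(30, 2))  # susceptible strain
X0 = rng.normal([0.6, 0.5], 0.05, size=(30, 2))  # non-susceptible strains
X = np.vstack([X1, X0])
y = np.array([1] * 30 + [0] * 30)

# Fit a logistic risk score p(susceptible) = sigmoid(Xw + b) by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```

With well-separated synthetic clusters the score classifies all animals correctly; real electrophysiologic features would of course overlap far more, and out-of-sample validation would be required.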
Several limitations regarding this study should be noted. As with essentially all animal models of TBI, the brain injury is produced under anesthesia and surgical conditions, neither of which is present in human TBI. Anesthesia likely has important effects on the brain injury (59) and on electrophysiologic activity. Urethane anesthesia was used in these experiments as it has a lesser impact on electrophysiologic activity than other agents (60). However, given the associated adverse effects of urethane, a survival surgery and subsequent follow-up to examine long-term outcome in these rats was not possible. Additionally, the electrophysiologic activity was recorded immediately after the injury, which would not be possible in clinical settings. This timepoint was chosen in an effort to identify the earliest point of divergence in response to injury among these strains, and this work succeeded in demonstrating that the differences are apparent immediately (0.5 min) after injury, though the distinct patterns are no longer apparent at later timepoints in the early period (5, 10, or 15 min).
Our findings, including both measures of electrical brain activity and BDNF protein concentration, suggest a potential critical period for these conditions beginning immediately following injury. Future efforts will focus on the progression of these newly identified differences in the unique inbred strains beyond the acute timepoint and on direct correlation with sequelae of TBI including PTE and cognitive and behavioral deficits. As electrophysiologic activity can be monitored non-invasively and relatively easily in humans, and as conditions such as PTE may be expected to have a robust signal in electrophysiologic activity, the ability to identify additional electrographic biomarkers in genetically susceptible individuals is promising. Equally, molecules such as BDNF can be assayed in blood and CSF and may provide complementary prognostic information. An improved understanding of the cellular, molecular, and circuit plasticity mechanisms activated in response to TBI is key to developing much-needed novel therapeutic approaches. Given the large and growing burden of TBI, an improved understanding of the mechanisms leading to these conditions, including critical periods for their development and intervals during which disease-modifying intervention is possible, is vital for improved diagnosis and development of treatments.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The animal study was reviewed and approved by University of Wisconsin Institutional Animal Care and Use Committee.
AUTHOR CONTRIBUTIONS
RK designed and conducted experiments and wrote the manuscript. PR and TS assisted with experimental design and manuscript revisions. Inbred rat strains (PPKS and PPKR rats) were generated by TS.
Indole-3-carbinol in vitro antiviral activity against SARS-Cov-2 virus and in vivo toxicity
The effects of the indole-3-carbinol (I3C) compound have been extensively described as an antitumor drug in multiple cancers. Herein, I3C was tested for toxicity and antiviral activity against SARS-CoV-2 infection. Antiviral activity was assessed in vitro both in the VeroE6 cell line and in human lung organoids (hLORGs), where I3C exhibited direct anti-SARS-CoV-2 replication activity; an antiviral effect and a modulation of the expression of genes implicated in innate immunity and the inflammatory response were observed at 16.67 μM. Importantly, we further show that I3C is also effective against the SARS-CoV-2 Omicron variant. In a mouse model, we assessed possible toxic effects of I3C through two different routes of administration: intragastric (i.g.) and intraperitoneal (i.p.). The LD50 (lethal dose 50%) values in mice were estimated to be 1410 and 1759 mg/kg i.g. in male and female mice, respectively, while the estimated values for i.p. administration were 444.5 mg/kg and 375 mg/kg in male and female mice, respectively. Below these values, I3C (in particular at 550 mg/kg for i.g. and 250 mg/kg for i.p.) induces neither death nor abnormal toxic symptoms, and no histopathological lesions were found in the tissues analysed. These tolerated doses are much higher than those already proven effective in pre-clinical cancer models and in vitro experiments. In conclusion, I3C exhibits significant antiviral activity, and no toxic effects were recorded for this compound at the indicated doses, characterizing it as a safe and potential antiviral compound. The results presented in this study provide experimental pre-clinical data necessary for the start of human clinical trials with I3C for the treatment of SARS-CoV-2 infection and beyond.
INTRODUCTION
Indole-3-carbinol (I3C, Fig. 1) is a compound of considerable interest, much sought after due to its broad biological properties and its use as an antitumor agent [1].
I3C is a natural indole carbinol phytochemical derived by hydrolysis from glucobrassicin by plant or bacterial myrosinase in cruciferous vegetables of the Brassica genus, such as cabbage, broccoli and Brussels sprouts, and is able to activate multiple antiproliferative cascades [2,3].
For decades, it has been widely explored regarding potential roles in several human cancer types [4,5] (i.e., melanoma, breast, prostate, lung, colon, leukaemia, and cervical cancer [6][7][8][9][10][11][12][13][14][15][16]) for its chemopreventive action on cancer in pre-clinical models and also for its promising effectiveness in clinical trials [17]. Many studies showed that I3C is involved in different cellular mechanisms, including transcriptional, enzymatic, metabolic and cell signalling processes. In particular, I3C can suppress cell cycle progression, block cancer cell migration, promote apoptosis, and inhibit tumour growth [18][19][20]. Although I3C interacts with different pathways, the exact mechanisms by which it influences human cells are not yet fully understood. It has been proposed that I3C may mediate its antiproliferative effects in cancer cells by directly interacting with different classes of target proteins with enzymatic activities. In particular, it has been demonstrated that I3C and its synthetic derivatives are potent natural inhibitors of HECT family E3 ligases, suggesting the potential importance of I3C in developing highly potent and stable anti-cancer molecules [21].
In January 2020, COVID-19, an infectious disease caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), rapidly became a worldwide pandemic [31]. This pandemic is ongoing, and the global number of confirmed SARS-CoV-2 cases continues to rise. SARS-CoV-2 is a novel coronavirus belonging to the Beta-coronavirus genus [32]. Since the pandemic broke out, international research laboratories, together with pharmaceutical and biotech companies, have been working in an unprecedented manner at extraordinary speed to find and evaluate drugs, vaccines and other solutions aimed at decreasing hospital admissions and helping heal patients and support recovery.
Nowadays, the epidemiological trend of SARS-CoV-2 does not allow us to hypothesize a rapid disappearance of the disease, and no data are available on the possible spectrum of protection and duration of immunity conferred by the vaccines currently in use or available and by those under development [33,34]. For these reasons, it is vitally important to have molecules that can reduce the infection burden and the severity of lesions in individuals with SARS-CoV-2 infection [35][36][37]. To identify new drug candidates, different approaches are being explored, starting with artificial intelligence and in vitro and in vivo studies, aimed at identifying molecules capable of blocking the entry of the virus into cells, its intracellular replication, and cell egress. Several lines of research suggest that SARS-CoV-2 neutralizing antibodies (nAbs), which bind directly to the virus's spike glycoprotein and inhibit entry into host cells, have therapeutic potential [38]. However, monoclonal antibodies currently approved for clinical use either fail to neutralize the Omicron virus, with its mutated spike protein, or demonstrate significantly reduced neutralizing efficiency [39].
Recently, I3C displayed potent anti-SARS-CoV-2 effects [40]. We demonstrated that HECT proteins are involved in SARS-CoV-2 pathology, physically interacting with and ubiquitylating the SARS-CoV-2 spike protein. We showed that some members of the HECT family are expressed in greater quantities in the cells of infected human subjects and in mouse models of COVID-19. Moreover, we found that several rare variants in the NEDD4 E3 ubiquitin protein ligase (NEDD4) and WW domain containing E3 ubiquitin protein ligase 1 (WWP1) genes are associated with severe cases of COVID-19, when compared to asymptomatic controls [40]. In addition, we proved that I3C is able to block SARS-CoV-2 viral egression in the Vero E6 model through the inhibition of HECT proteins implicated in Covid-19 pathology [40]. The importance of E3 ligases in the ubiquitination of some viral proteins has recently been confirmed by combined multi-omics studies, which have demonstrated the ability of SARS-CoV-2 not only to remodel innate immunity, but also to promote viral infection by hijacking specific ubiquitination processes [41]. All these data suggest the potential use of I3C as an antiviral in clinical trials for patients with COVID-19.
Here, we assessed I3C pharmacological potential, investigating the toxicity and antiviral effect of I3C in in vitro and in vivo models.
I3C antiviral activity in vitro
We first evaluated the impact of I3C on antiviral activity during SARS-CoV-2 infection in Vero E6 cells, regardless of the time of treatment. We treated the cells with I3C using a 3-fold decreasing concentration scale ranging between 50 and 0.069 μM, using three different treatment protocols: (i) 1 h before SARS-CoV-2 infection (pre-treatment); (ii) concomitantly with the infection (co-treatment); and (iii) 1 h after infection (post-treatment). The drug was then added again at two time points (24 and 48 h) after SARS-CoV-2 infection (MOI = 0.001). Antiviral activity was evaluated 72 h post-infection, when the viral-induced cytopathic effects (CPE) were assessed. We observed that the pre-treatment protocol with I3C significantly reduced the SARS-CoV-2-induced CPE both at 50 μM [a concentration known to be partially toxic to cells [40]] and at 16.67 μM, when compared to DMSO-treated cells (*p < 0.05). The reduction was lost at lower I3C concentrations (Fig. 2A).
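The 3-fold dilution scale described above (50 μM down to 0.069 μM) can be reproduced with a short calculation; the seven-step length of the series is inferred from the stated endpoints:

```python
# Reproduce the 3-fold serial dilution scale from 50 uM down to ~0.069 uM
start_uM = 50.0
series = [start_uM / 3 ** i for i in range(7)]
print([round(c, 3) for c in series])
# → [50.0, 16.667, 5.556, 1.852, 0.617, 0.206, 0.069]
```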
On the other hand, in the I3C co- and post-treatment protocols, we observed a statistically significant reduction only at 50 μM (*p < 0.05), while no significant differences were observed at the other I3C concentrations, when compared to DMSO-treated cells (Fig. 2B, C, respectively). Moreover, to assess the direct effect on cells, we also evaluated the effect of I3C at the concentrations used on cell viability without any infection, but we did not observe any difference between I3C treatment and the DMSO control in terms of cell viability (Fig. 2D).
Overall, these data demonstrate that the I3C pre-treatment protocol exerts direct anti-SARS-CoV-2 activity and appears to be the best treatment schedule compared to the co- and post-treatment protocols, because we observed an antiviral effect at 16.67 μM. In particular, these data highlight that I3C has an antiviral effect even when the cells are already SARS-CoV-2-infected, as measured by the CPE-inhibition assay. This is an important finding, further suggesting I3C as a possible drug-repurposing candidate for Covid-19 therapy.
To investigate the effects of the I3C pre-treatment protocol on the innate immune response and support its efficacy against VSVpp.SARS-2-S, we evaluated the expression of immunity-related genes in infected human lung organoids (hLORGs) 72 h after I3C treatment at 16.7 µM. First, following infection and treatment, we counted the number of organoids formed after 72 h of treatment. In untreated (Ctrl), infected (S+), and I3C-treated and infected [16.7 μM] organoids, we obtained approximately the same number of hLORGs (Fig. 2E), indicating that I3C is not toxic at this concentration: a toxic compound would prevent the single cells that make up the organoids from self-assembling. Secondly, as reported in [42], we confirmed by RT-qPCR that pseudo-SARS-CoV-2 induced the expression of type I (IFNβ) and type III (IFNλ1) IFNs, the first line of protection against viral infections, as well as the expression of interferon-stimulated genes (IFIT1, TRIM22, and MX2), which counteract viral replication, transcription, and translation in infected and uninfected cells and stimulate the adaptive immune response. In addition, we observed that mRNA levels of proinflammatory chemokines and cytokines (CXCL10, IL-6 and TNF-α) were significantly upregulated in pseudo-SARS-CoV-2-infected hLORGs (***p < 0.001) (Fig. 2F).
After the I3C pre-treatment protocol, we observed a significant decrease in the mRNA levels of type I and III IFNs in infected hLORGs treated with I3C at a concentration of 16.7 µM compared to infected (S+) hLORGs (*p < 0.05). The hLORGs treated with I3C also showed a significant downregulation of interferon-stimulated genes (TRIM22, IFIT1 and MX2) and of pro-inflammatory chemokines and cytokines (IL-6, TNF-α and CXCL10) (*p < 0.05). This downregulation is evident for all genes in hLORGs after treatment with 16.7 µM I3C (Fig. 2F).
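The qRT-PCR fold changes reported in Fig. 2F are conventionally computed with the 2^-ΔΔCt method; the sketch below uses invented Ct values and a hypothetical GAPDH reference gene, not the study's data:

```python
# Relative expression by the 2^-ddCt method commonly used for RT-qPCR;
# Ct values below are invented for illustration, not the study's data.
def fold_change(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    d_ct_treated = ct_gene_treated - ct_ref_treated
    d_ct_control = ct_gene_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Example: a cytokine in I3C-treated vs. infected hLORGs, GAPDH as reference
print(fold_change(26.0, 18.0, 24.0, 18.0))  # → 0.25, i.e. 4-fold downregulated
```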
We also evaluated the I3C pre-treatment protocol at 16.7 µM in the VeroE6 cell line infected with pseudotype lentivirus SARS-2-Omicron. The antiviral effect of I3C was evaluated by a luciferase assay, which quantifies the ability of the virus to infect VeroE6 cells. As shown in Fig. 2G, the capacity of the Omicron pseudovirus to infect target cells was strongly inhibited. Specifically, the infection rate of cells treated with I3C was 18% in the pseudotype lineage tested (**p < 0.01) (Fig. 2G). Based on the luciferase assay data, it can be concluded that the I3C pre-treatment protocol is also able to inhibit pseudotype lentivirus SARS-2-Omicron infection, demonstrating that pre-treatment is the best treatment schedule against this new emerging SARS-CoV-2 variant.
Preclinical toxicity in a mouse model

Lethal dose (LD50) and toxicity grade of I3C in mice. To determine the short-term adverse effects of I3C on the main target organs when administered in a single dose by the intraperitoneal (i.p.) and intragastric (i.g.) routes in mice, we first estimated the lethal dose 50% (LD50) of I3C for i.p. and i.g. administration and its toxicity grade for both male and female mice using the AOT425-StatPgm software, based on maximum likelihood with 95% PL confidence intervals. From this analysis, we obtained the following estimated LD50 values for i.g. administration: 1410 mg/kg (95% PL confidence interval 861.3 to 1680 mg/kg) and 1759 mg/kg (95% PL confidence interval 0 to greater than 2000 mg/kg) for male and female mice, respectively (Table 1). The estimated LD50 values for i.p. administration were 444.5 mg/kg (95% PL confidence interval 355.3 to 543) and 375 mg/kg (95% PL confidence interval 337.8 to 478) for male and female mice, respectively (Table 1).

[Fig. 2 legend, displaced; opening truncated: ... were evaluated 72 h post-treatment by setting the uninfected and untreated control cells as 100%, with the remaining values represented as relative values. Data are expressed as mean ± SD (n = 9) of three independent experiments performed in triplicate. Results were analysed using GraphPad Prism 9. Statistically significant differences between DMSO and I3C are represented as *p < 0.05, determined using the Wilcoxon test. E Percentage of total hLORGs reformed 72 h post infection (hpi) with VSVpp.SARS-2-S D614G variants; n = 25-30 fields from three biological replicates and two different observers. F Expression of immunity-related genes in hLORGs 72 h post-treatment, with (S+) and without (Ctrl) spike protein of VSVpp.SARS-2-S and following treatment with I3C at 16.7 µM. Bar graph shows expression of immune response genes and cytokines, quantified by qRT-PCR at 72 h post-treatment (n = 6). *p < 0.05, **p < 0.01 and ***p < 0.001 by one-way ANOVA test. G Transduction efficiency was quantified by measuring virus-encoded luciferase activity in infected cells 72 h post I3C treatment with pseudotype lentivirus SARS-2-Omicron. Data are expressed as the percentage of infection, averaged (n = 6) from three biological replicates and reported as mean ± standard error of the mean (SEM). Each biological replicate was performed in duplicate. **p < 0.01 by Student's t-test.]
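The AOT425-StatPgm estimation itself is not reproduced here; as an illustrative sketch, an LD50 can be recovered from dose-mortality counts by maximum likelihood under a log-logistic model. The counts below are invented to loosely echo the i.p. values, and the grid search is deliberately coarse:

```python
import math

# Invented dose-mortality data (dose mg/kg, n dosed, n dead) loosely echoing
# the i.p. male values; not the study's actual per-animal data.
data = [(250, 5, 0), (375, 5, 1), (444, 5, 3), (550, 5, 4), (1000, 5, 5)]

def nll(ld50, slope):
    """Negative log-likelihood of a log-logistic dose-mortality curve."""
    total = 0.0
    for dose, n, dead in data:
        p = 1 / (1 + math.exp(-slope * (math.log(dose) - math.log(ld50))))
        p = min(max(p, 1e-9), 1 - 1e-9)
        total -= dead * math.log(p) + (n - dead) * math.log(1 - p)
    return total

# Coarse grid search over candidate LD50 values and slopes
best = min(
    ((ld50, slope) for ld50 in range(300, 700, 5) for slope in range(1, 30)),
    key=lambda ps: nll(*ps),
)
print("estimated LD50 ≈", best[0], "mg/kg")
```

A production analysis (as in the AOT425 software) would use proper numerical optimization and profile-likelihood confidence intervals rather than a grid.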
Death rate. All mice (male and female) were divided into two groups (i.g. vs. i.p.) and treated with different doses of I3C: 2000, 1750, 1500, 1000 and 550 mg/kg in a single i.g. administration, and 1000, 550, 375 and 250 mg/kg in a single i.p. administration. After I3C dosing, the mice were observed individually and periodically for the first 24 h, with particular attention during the first 4 h, and daily thereafter, for a total of 14 days.
Regarding I3C i.g. administration, we observed that single administration of doses higher than 1000 mg/kg resulted in a relevant mortality rate in both male and female mice (Fig. 3A). In particular, all mice (male and female) died within 4 h of the administration of 2000 mg/kg i.g. (data not included in the graph).
Similarly, for I3C i.p. administration, we observed that single-dose administration higher than 375 mg/kg resulted in a relevant mortality rate in both male and female mice. All male mice died within 4 h of i.p. administration of 1000 mg/kg, while all female mice died within 4 h of i.p. administration of 550 mg/kg (Fig. 3B).
Body weights. All mice were also weighed individually after I3C i.g. and i.p. dosing, at least once during the first 30 min, periodically for the first 24 h, with particular attention during the first 4 h, and daily thereafter, for a total of 14 days. We observed that in all mice (male and female), after i.g. administration of a single dose higher than 1500 mg/kg, body weight tended to decrease and then rose slowly (Fig. 4A). Mice treated with 2000 mg/kg are not included in the graph because they died within 4 h, as mentioned in the previous paragraph.
Similarly, when I3C was administered i.p., we observed that body weight tended to decrease and then rose slowly (in both males and females) when a single dose higher than 375 mg/kg was administered (Fig. 4B).
Toxicological results. Abnormal toxic symptoms were also evaluated after I3C administration. At doses greater than 1000 mg/kg (i.g.) and 375 mg/kg (i.p.), piloerection and a dull dorsal hair phenomenon (contraction of the arrector pili muscle) appeared in mice before death, and spontaneous activity decreased compared to control mice (Fig. 5A). Moreover, some mice (including some that later died) developed conjunctival opacification, iritis, and conjunctivitis with reduced spontaneous activity after I3C administration (i.g. ≥1000 mg/kg and i.p. ≥375 mg/kg) compared to control mice (Fig. 5B). The pain score index of the experimental animals showed a statistically significant increase after I3C administration (i.g. ≥1000 mg/kg and i.p. ≥375 mg/kg) compared to control mice (***p < 0.001) (Fig. 5C).

Histopathology. For mice whose survival time was at least 24 h after exposure, histopathological examination of the organs (heart, liver, spleen, lung and kidney) was carried out in order to obtain relevant toxicity information.
Histopathological examination of the heart, liver, spleen, lung and kidney after I3C i.g. administration revealed no I3C effects in mice given 2000 mg/kg compared to control tissue, except for congestion in the liver of the mice in the I3C 2000 mg/kg group. The congestion was localized around the central vein: the central vein and the hepatic sinusoids around it were obviously dilated and filled with red blood cells, and some congestion areas were connected to form a blood stasis zone. The hepatocytes in the central area of the lobule atrophied and disappeared, resulting in sparse, scattered and disordered hepatocytes (Fig. 6A).
Regarding I3C i.p. administration, there were no I3C effects in any of the tissues analysed from mice given 550 mg/kg compared to the control group. As for i.g. administration, we observed only congestion in the liver of the mice in the I3C 550 mg/kg group compared to the control group. The congestion was again localized around the central vein: the central vein and the surrounding hepatic sinusoids were obviously dilated and filled with red blood cells, some congestion areas were connected to form a blood stasis zone, and the hepatocytes in the central area of the lobule atrophied and disappeared, resulting in sparse, scattered and disordered hepatocytes (Fig. 6B). The number of congestion foci containing red blood cells in each section after both I3C i.g. and i.p. administration was counted. We observed a statistically significant increase in the number of congestion foci containing red blood cells after both i.g. and i.p. I3C administration (***p < 0.001) (Fig. 6C, D).
DISCUSSION
Viruses cause a wide spectrum of clinical illnesses, most of which are acute respiratory infections. In most cases, the symptoms of acute respiratory infection are similar for different viruses, though the severity may vary [43]. Respiratory viral infections represent a significant threat to human health worldwide. SARS-CoV-2 is responsible for the ongoing worldwide pandemic, which has already taken more than six million lives [33]. Antiviral drugs are being studied with the aim of inhibiting the replication of the virus; immunomodulatory drugs to attenuate the overactive immune system [44,45]; and neutralizing antibodies to inhibit the virus and help the immune system clear the infection [38,46,47]. To date, about 156 vaccines have been designed and more than 120 clinical trials are underway [48,49]. However, the protection conferred by the vaccines available today decreases within 3-6 months, as evidenced by the rates of breakthrough infections caused by new variants of the virus [34]. Therefore, new compounds for prevention and effective treatment are urgently needed.
Recently, we demonstrated that indole-3-carbinol (I3C), a natural inhibitor of HECT family E3 ligases, displays potent anti-SARS-CoV-2 effects, inhibits viral egression and can potentially be used as an antiviral drug [40]. In this study, we extended the characterization of the in vitro antiviral activity of I3C against SARS-CoV-2 in the Vero E6 cell line. We substantiated that I3C significantly reduces the SARS-CoV-2-induced CPE at 16.67 μM compared to DMSO-treated cells, and provide evidence that I3C treatment still has antiviral effects when provided at the time of infection or after infection. Specifically, we present novel data demonstrating that I3C displays an antiviral effect even when the cells are already SARS-CoV-2-infected (in particular, see the post-treatment protocol; Fig. 2C), as measured by the CPE-inhibition assay; we previously demonstrated that a reduced CPE corresponds to decreased viral production using either I3C [40] or other compounds [45,50]. This is an important finding which further supports the notion that I3C is an exciting drug-repurposing candidate for Covid-19 therapy. SARS-CoV-2, similarly to other RNA viruses, causes the immune system to attack host tissues in the form of an autoimmune and autoinflammatory process with an exaggerated release of proinflammatory cytokines and type I interferon (IFN). Therefore, to evaluate the efficacy of I3C as an antiviral, we analysed the cytokine response after infection with the SARS-CoV-2 pseudovirus and treatment with I3C in a three-dimensional model of human lung organoids (hLORGs) obtained from induced pluripotent stem cells (iPSCs) [42]. We observed a significant reduction in the mRNA levels of IFN-related genes in infected hLORGs treated with I3C at a concentration of 16.7 µM compared to infected (S+) hLORGs (*p < 0.05).
We expanded our analysis to test the efficacy of I3C in inhibiting the highly prevalent SARS-CoV-2 Omicron variant. I3C proved to also be effective towards this variant, which is of great relevance in view of the global impact of this variant on human health. These data demonstrate the usefulness of I3C in reducing the efficiency of in vitro infection and provide further evidence that host-targeted antiviral therapy may be beneficial to counter viral resistance and develop broad-spectrum antivirals.
Because I3C is a bioactive compound derived from Brassicaceae, it is perceived as safe, increasing interest in its use to prevent several diseases, including Covid-19. However, the effective concentrations of I3C at clinical and preclinical levels, and the effects associated with the consumption of large amounts, remain unclear and warrant further study. For this reason, we evaluated potential adverse in vivo effects in the mouse in a uniquely comprehensive manner. We evaluated toxicity through different routes of administration to understand which organs are targets of possible toxicity and to assess possible distinct adverse effects due to the different administration routes. Indeed, we observed that the liver is the target organ for both routes of administration (an organ never previously described in the literature as a target of the toxic effects of I3C at high doses). In addition, for the first time we used both male and female mice to carry out toxicity studies, because in females the effect of I3C may be subject to hormonal influences on phase I and phase II metabolism. In fact, this analysis showed different tolerated doses in males and females, to be considered in any future clinical trials in which I3C is used as an antiviral compound.
Specifically, we employed the mouse to evaluate possible toxic effects of I3C through two different routes of administration: intragastric (i.g.) and intraperitoneal (i.p.). The estimated LD50 (lethal dose 50%) values in mice were 1410 and 1759 mg/kg i.g., while the LD50 values for i.p. administration were 444.5 mg/kg and 375 mg/kg, in male and female mice, respectively. We also established that above 1000 mg/kg (i.g.) and 375 mg/kg (i.p.), the toxic effects are characterized by piloerection, a dull dorsal hair phenomenon, conjunctival opacification, iritis, and conjunctivitis with reduced spontaneous activity after I3C administration. Moreover, on histopathological examination, we observed only congestion areas in the liver connected to the blood stasis zone, while the spleen, kidney, lung and heart did not display gross pathological change after I3C administration at 2000 mg/kg (i.g.) and 550 mg/kg (i.p.).

[Fig. 5 legend, displaced: Abnormal toxic symptoms after I3C administration. A Piloerection and dull dorsal hair phenomenon (contraction of the arrector pili muscle) after I3C administration (n = 6) (i.g. ≥ 1000 mg/kg and i.p. ≥ 375 mg/kg) compared to control mice (n = 6). B Conjunctival opacification, iritis, and conjunctivitis with reduced spontaneous activity after I3C administration (n = 6) (i.g. ≥ 1000 mg/kg and i.p. ≥ 375 mg/kg) compared to control mice (n = 6). C The pain score index of experimental animals increased significantly after I3C administration (scoring standard of pain index for experimental animals: 0: normal hair and activity; 1: part of the hair erect and temporarily arched back; 2: obviously rough fur and intermittently arched back; 3: obviously rough fur, accompanied by other symptoms such as arched back, slow reaction and behaviour, and even death). Data are expressed as mean ± SD.]
As reported in [51], the liver represents a fairly large fraction of body weight and appears to be an important reservoir of I3C and related compounds. Below these values, I3C (in particular at 550 mg/kg for i.g. and 250 mg/kg for i.p.) induces neither death nor abnormal toxic symptoms, and no histopathological lesions were found in the tissues analysed.
In a study conducted by Fletcher et al. [52], athymic mice received diets supplemented with 0-100 μmol I3C/g diet for 4 weeks. They found that mice were not viable after three days on a 100 μmol I3C/g supplemented diet. On the other hand, mice fed a 10-50 μmol I3C/g supplemented diet survived but showed concentration-dependent adverse effects. Noteworthy, intestinal damage occurred in mice that received I3C supplementation as low as 10 μmol/g diet; the intestine therefore appeared to be the target of I3C toxicity. Moreover, I3C was seen to significantly alter the number and width of intestinal villi, which is associated with a dose-dependent reduction in cell proliferation and an increase in apoptosis. Other molecular effects observed for I3C include activation of multiple xenobiotic metabolism pathways. Moreover, this study revealed that the total amount of I3C consumed by an animal per day (5 g at 100 μmol/g) equals a ~75 mg tablet. One commercially available supplement tablet normally comes in the form of 200 mg I3C. This is a concentration comparable to or lower than orally administered doses in humans [53,54].

[Fig. 6 legend, displaced: Histopathological examination after I3C administration. A Spleen, kidney, lung and heart did not show gross pathological change after I3C i.g. administration at 2000 mg/kg. Only in the liver (red square) did we observe congestion areas connected to the blood stasis zone. B Spleen, kidney, lung and heart showed no gross pathological change after I3C i.p. administration at 550 mg/kg. We only observed congestion areas in the liver (red square) connected to the blood stasis zone. Sections were stained with H&E; black arrows indicate congestion containing red blood cells. C The number of congestion foci containing red blood cells in each section after I3C i.g. administration was counted. Results are presented as the average number of congestion foci per mm² as mean ± SD (n = 25-30 fields) from three biological replicates and two different observers. D The number of congestion foci containing red blood cells in each section after I3C i.p. administration was counted. Results are presented as the average number of congestion foci per mm² as mean ± SD (n = 25-30 fields) from three biological replicates and two different observers. Statistical significance was determined by one-way analysis of variance, ***p < 0.001.]
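The diet-to-tablet conversion reported above (5 g of diet per day at 100 μmol I3C/g ≈ a 75 mg tablet) can be checked arithmetically; the molar mass used is that of I3C (C9H9NO):

```python
# Check of the diet-to-tablet conversion: 5 g diet/day at 100 umol I3C/g.
MW_I3C = 147.2          # g/mol, molar mass of I3C (C9H9NO)
daily_umol = 5 * 100    # umol consumed per day
daily_mg = daily_umol * 1e-6 * MW_I3C * 1e3
print(round(daily_mg, 1))  # → 73.6, i.e. roughly a 75 mg tablet
```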
In a clinical study, healthy women were given orally up to 1200 mg I3C, and 5 out of 20 subjects reported mild gastrointestinal distress, nausea and vomiting after ingesting a single dose of ≥600 mg I3C [53]. These subjects recovered spontaneously upon discontinuing the supplement, with no long-term effects [53]. A few I3C studies have reported adverse reactions ranging from skin rash and a slight increase in gastrointestinal motility to mild bowel upset [55], indicating a potential safety concern of I3C supplementation.
Similarly, Wong and colleagues [56] enrolled 60 women in a placebo-controlled, double-blind, dose-ranging chemoprevention study of indole-3-carbinol (I3C). Each woman took a placebo capsule or an I3C capsule daily for a total of 4 weeks. Participants were given doses of I3C between 50 mg (low dose) and 400 mg (high dose). Except for those with a prior history of elevated alanine aminotransferase, none of the participants experienced toxic effects.
In our in vivo experiment, we also observed a different response in male and female mice treated with different dosages of I3C. These differences can be explained by the fact that I3C is known to exert an anti-estrogenic effect, and female animals may respond to I3C differently due to the interaction between the estrogen receptor and phase I and phase II metabolism [57][58][59].
On the basis of the data presented here and of the toxicity studies on I3C reported in the literature, and taking into account that a 16.67 μM concentration is equivalent to a 2.5 mg/kg dose in vivo, we conclude that the concentrations at which the antiviral effects were observed in vitro would be well tolerated in vivo. In conclusion, I3C exhibits significant antiviral activity, and no toxic effects were recorded at the indicated doses, characterizing it as a safe and promising antiviral compound.
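The stated equivalence between 16.67 μM in vitro and 2.5 mg/kg in vivo can be reproduced under one assumption flagged here explicitly, since the paper does not state its conversion method: that the dose distributes uniformly in roughly 1 L of body water per kg.

```python
MW_I3C = 147.17        # g/mol, molecular weight of indole-3-carbinol
dose_mg_per_kg = 2.5   # in vivo dose from the text
v_dist_l_per_kg = 1.0  # ASSUMPTION: uniform distribution in ~1 L/kg body water

conc_mg_per_l = dose_mg_per_kg / v_dist_l_per_kg  # 2.5 mg/L
conc_um = conc_mg_per_l / MW_I3C * 1000.0         # (mg/L) / (g/mol) * 1000 -> uM
print(round(conc_um, 2))  # -> 16.99 uM, close to the stated 16.67 uM
```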
Understanding the molecular interactions that modulate the output and egress of viral particles also offers an important opportunity to identify novel host targets for the development of antivirals to prevent and treat infections with coronaviruses and other emerging respiratory viruses [60]. Indeed, host-targeted antiviral therapy may be advantageous to counteract viral resistance and to develop broad-spectrum antivirals. The enzymatic activity of the HECT-E3 ligases has been implicated in the cell egress phase of some RNA viruses, possibly hijacking the Endosomal Sorting Complexes Required for Transport (ESCRT) machinery, and can therefore constitute a valid target for new classes of antiviral drugs. Interestingly, using a novel bioinformatic approach we have obtained evidence of probable physical and functional interactions between the Nsp15-NendoU endoribonuclease of SARS-CoV-2 and both WWP1 and NEDD4 (Novelli G et al., unpublished). The NendoU activity of Nsp15 is responsible for the protein's ability to interfere with the innate immune response [61]. Nsp15 degrades viral RNA to hide it from the host defences [62]. Nsp7b also interacts with NEDD4L with evidence (level = 2) (Novelli G et al., unpublished) [63]. NSP12 is a component of the coronaviral replication and transcription machinery, and it appears to be a primary target for the antiviral drug remdesivir [47].
Overall, this study demonstrated for the first time that I3C has an anti-SARS-CoV-2 effect independently of the time of treatment with respect to the time of viral infection. I3C also proves effective against the SARS-CoV-2 Omicron variant. Moreover, we provided evidence of the toxicological effects of this compound in an animal model setting and, although further studies will be needed to assess the antiviral activity of I3C in in vivo infection models, it appears to be a promising candidate for use in human clinical trials for the treatment of SARS-CoV-2 infection.
Chemical treatment
Indole-3-carbinol (I3C) was obtained from Sigma-Aldrich (Product Number: 17256, CAS No. 700-06-1). For in vitro assays, I3C was dissolved in 100% DMSO and added to the cell culture medium at different concentrations. For the in vivo assay, I3C suspensions at the corresponding doses were prepared in 10% DMSO, with 0.9% sodium chloride solution as the solvent.
I3C antiviral test
The antiviral activity of I3C was tested by the SARS-CoV-2-induced cytopathic effect (CPE) inhibition assay using Vero E6 cells infected with the SARS-CoV-2 strain isolated at INMI L. Spallanzani IRCCS (2019-nCoV/Italy-INMI1; GenBank MT06615656) as reported [40]. Briefly, cell monolayers growing in 96-well plates (3 × 10⁴ cells/well) were treated with different doses of either I3C or DMSO according to three different protocols: (i) 1 h before infection (pre-treatment); (ii) at the same time as infection (co-treatment); (iii) 1 h after infection (post-treatment). DMSO was used as control since I3C is solubilized in this compound. Cells were infected at 0.001 multiplicity of infection (MOI; the ratio of PFU to the number of cells), using MEM supplemented with heat-inactivated 2% FBS and 2 mM L-glutamine. Over the following 72 h, cells were treated by adding the compound/control to the culture medium every 24 h and maintained at 37°C with 5% CO₂. At 72 h post-infection, supernatants were discarded and 100 µL of crystal violet solution (Merck Life Science, Milan, Italy; Cat. No. 9448-2.5L-F) containing 2% formaldehyde (Carlo Erba Reagents, Milan, Italy; Cat. No. 415666) were added to each well for 20 min. Subsequently, the fixing solution was removed, plates were washed with tap water and then immersed in a bath of 2% formaldehyde solution in PBS for a further 20 min. Finally, cell viability was evaluated with a photometer measuring the optical density (OD) at 595 nm and reported as the percentage of surviving cells compared to the uninfected cells. Results are expressed as the mean ± SD. Statistical analysis of data was performed using the Wilcoxon test in GraphPad Prism 9.
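The viability readout described above (OD at 595 nm relative to uninfected controls) reduces to a simple ratio. A minimal sketch; the helper name and the optional blank-subtraction step are illustrative assumptions, not taken from the paper:

```python
def percent_viability(od_treated, od_uninfected, od_blank=0.0):
    """Surviving cells as a percentage of the uninfected control.

    od_treated    -- OD595 of an infected, compound-treated well
    od_uninfected -- OD595 of the uninfected control well
    od_blank      -- optional plate blank (assumed step, not stated in the paper)
    """
    return 100.0 * (od_treated - od_blank) / (od_uninfected - od_blank)

# Example: a treated well at OD 0.72 vs an uninfected control at OD 0.90
print(round(percent_viability(0.72, 0.90), 1))  # -> 80.0
```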
Pseudotypes SARS-2-S infection and I3C treatment in hLORGs
For treatments with I3C, the hLORGs [42] were incubated for a minimum of 1 h with I3C at 16.7 µM, and then the VSVpp.SARS-2-S virus (D614G) was added to the organoids (previously disrupted into small clumps) and left to act for 4 h at 37°C. hLORGs were then incorporated in drops of Matrigel GFR at a cell density of ~1200-1600 cells/μL, using CK + DCI media with Y-27832, and left to grow for 72 h before analysis (as reported in [42]). In particular, I3C was added at different time points as described in [40], specifically 1 h before and 4, 24 and 48 h after pseudovirus infection. The infection with the SARS-CoV-2 pseudovirus containing eGFP and gene expression analysis were evaluated at 72 h post-treatment. Briefly, TRIzol Reagent (Invitrogen, Life Technologies Corporation, Carlsbad, CA, USA) was used to extract total RNA from cells, according to the manufacturer's instructions. Total RNA samples were treated with DNase I, RNase-free (Ambion, Life Technologies Corporation, Foster City, CA, USA) to remove genomic DNA contamination. One µg of RNA was reverse transcribed using the High-Capacity cDNA Archive kit (Life Technologies Corporation, Foster City, CA, USA) and used in RT-qPCR. SYBR Green was used to assay mRNAs (Life Technologies Corporation, Foster City, CA, USA). As reference genes, 5.8S and GAPDH were employed. Primer sequences will be given upon request. The comparative ΔΔCt method was used to quantify relative gene expression levels.
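The comparative ΔΔCt quantification mentioned above follows the standard formula, relative expression = 2^(-ΔΔCt). A minimal sketch with hypothetical Ct values for a target gene normalized to a reference gene (e.g. GAPDH):

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Comparative ddCt method: fold change of a target gene in a treated
    sample relative to a control sample, each normalized to a reference gene."""
    delta_ct_sample = ct_target - ct_ref              # normalize treated sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier (relative to
# the reference) in the treated sample than in the control -> 4-fold change.
print(relative_expression(25.0, 18.0, 27.0, 18.0))  # -> 4.0
```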
The pseudovirus used in this experiment was kindly gifted by the Hoffmann lab, German Primate Center-Leibniz Institute for Primate Research, Göttingen, Germany. Briefly, the Vesicular Stomatitis Virus (VSV) pseudovirus system was employed to produce the SARS-CoV-2 pseudovirus, which displays the SARS-CoV-2 spike protein (S) on the VSV particle surface. The replication-deficient VSV vector lacks the genetic information for VSV-G and instead codes for two reporter proteins, enhanced green fluorescent protein (GFP) and firefly luciferase (Fluc) (VSV*ΔG-Fluc); it was used to generate SARS-CoV-2 S-pseudotyped particles that accurately mimic key aspects of SARS-CoV-2 entry into cells. The efficiency of virus entry was first observed by evaluating GFP fluorescence (data not shown) and then specifically quantified by luciferase assay. The VSV pseudovirus replicates within 16 h and, without its original G protein, is restricted to a single round of replication.
I3C treatment against Omicron variant
Vero E6 cell monolayers growing in 96-well plates (3 × 10⁴ cells/well) were treated for 1 h with 16.7 µM of I3C before infection with the Pseudotype Lentivirus SARS-2-Omicron (ReVacc Scientific). This pseudotype uses a recombinant lentivirus to carry the spike protein of SARS-CoV-2 (GenBank: MN908947) with the multiple mutations initially identified in the Omicron variant (B.1.1.529) (BA.1). The pseudotyped lentivirus, lacking its original envelope glycoprotein G, is restricted to a single round of replication. Cell infection can be monitored by luciferase activity. DMSO was used as vehicle control since I3C is solubilized in this compound. Cells were infected at 250 ffu/well using MEM supplemented with heat-inactivated 2% FBS and 1% L-glutamine in the presence of I3C or DMSO. After 1 h of incubation, the viral input was replaced by fresh medium containing either I3C or DMSO. Cells were then treated with either I3C or DMSO every 24 h and incubated at 37°C with 5% CO₂ for 60-72 h, when the percentage of infected cells was measured by luciferase assay.
Animals
50 male and 50 female BALB/c mice (4-6 weeks of age; body weight of 18-22 g) were obtained from the Beijing Charles River Experimental Animal Center. All mice were kept in an SPF animal facility (Laboratory Animal Usage License Number of Testing Facility: SYXK (SU) 2020-0028). The mice were individually housed in cages elevated off the floor, supplied by a contract vendor. These cages conform to standards set forth by the US Animal Welfare Act, and the cage size complies with the recommendations set by the Guide for the Care and Use of Laboratory Animals. Each mouse, at the commencement of its dosing, was aged between 4 and 6 weeks, and its weight fell within ±20% of the mean weight of any previously dosed animals.
Assessment of acute preclinical toxicity in mice
Based on the practice guide (http://www.fda.gov/cder/guidance/index.htm) for dose conversion between animals and humans and OECD Test Guideline 425 (Acute Oral Toxicity: Up-and-Down Procedure), 50 mg/kg of I3C was considered the optimal starting dosage for the acute toxicity test of I3C in mice. The dosing range of I3C was set from 50 to 2000 mg/kg (the default dose progression factor is 3.2). The test substance was administered in a single dose by gavage using a stomach tube (20 mL/kg). Mice were fasted for 3-4 h prior to dosing. Following the period of fasting, the mice were weighed and the test substance administered; food was then withheld for a further 1 h. I3C was tested using a stepwise procedure, each step using three mice. The dose given to the first mouse (50 mg/kg) was one level lower than the starting dose. If the animal survived, a higher dose was given to the second mouse; if the first mouse was dead or dying, a lower dose was given to the second mouse.
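The up-and-down dose selection described above can be sketched as a single decision step. This is an illustrative simplification of OECD TG 425, not its full stopping-rule logic; the 3.2 factor corresponds to half a log10 unit, and the cap at the 2000 mg/kg limit dose is an assumption consistent with the dosing range stated in the text:

```python
def next_dose(current_dose, survived, factor=3.2, limit_dose=2000.0):
    """One step of an up-and-down sequence: step up after survival,
    step down after death or moribundity, capped at the limit dose."""
    proposed = current_dose * factor if survived else current_dose / factor
    return min(proposed, limit_dose)

# Example sequence starting at 50 mg/kg with three survivals in a row:
dose = 50.0
for outcome in (True, True, True):
    dose = next_dose(dose, outcome)
print(round(dose, 1))  # 50 -> 160 -> 512 -> 1638.4
```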
Mice were divided into different groups (n = 3 females and 3 males/group). The control group received vehicle alone, and I3C was tested in single doses of 2000, 1750, 1500, 1000 and 550 mg/kg by the intragastric (i.g.) route and 1000, 550, 375 and 250 mg/kg by the intraperitoneal (i.p.) route. The dose responsible for the death of 50% of the experimental animals (LD50) was estimated. All mice were observed individually for different parameters after I3C i.g. and i.p. dosing: at least once during the first 30 min, periodically during the first 24 h (with special attention given during the first 4 h), and daily thereafter, for a total of 14 days.
According to the results of observation 48 h after administration, the time interval between treatment groups was determined by the onset, duration and severity of toxic signs. Treatment of mice at the next dose was delayed until we were confident of the survival of the previously dosed mice. Each living mouse was observed for 14 days, and mice that died later were recorded as deaths in the statistics of the results. If a mouse showed no obvious toxicity after administration, the test period was 7 to 10 days. If a mouse showed an obvious toxic reaction, such as weight loss, it was observed continuously for 14 days after administration, and the test period was about 28 days.
The mice in the treatment group and the control group were evaluated according to the following indicators: toxic dose (LD50, death-dose curve, 95% confidence limit); symptoms; weight; and histopathology. The pain score index was evaluated after I3C administration according to the scoring standard of the pain index for experimental animals, as follows: 0, normal hair and activity; 1, part of the hair erect and temporarily arched back; 2, obviously rough fur and intermittent arched back; 3, obviously rough fur accompanied by other symptoms such as arched back, slow reaction and behaviour, and even death.
Histopathology
For histological analysis, the liver, heart, spleen, lung and kidney were fixed in 4% formalin, followed by dehydration, paraffin embedding, sectioning, and standard Haematoxylin & Eosin (H&E) staining. All organ paraffin sections were viewed by light microscopy; a pathologist carefully counted the number of congestions containing red blood cells in each section after both I3C i.g. and i.p. administration and marked them directly on the representative H&E-stained section.
Statistical analysis
The results are expressed as the mean ± SD. Data were organized in Excel 2017 and SPSS 16.0. Statistical analysis was performed using the one-way ANOVA test and the Kaplan-Meier test with SPSS 16.0 software. For in vivo acute toxicity, the toxic dose values (LD50, death-dose curve and 95% PL confidence interval) were calculated with the AOT425 StatPgm software.
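The one-way ANOVA used for the congestion counts reduces to a single F statistic: the between-group mean square divided by the within-group mean square. A dependency-free sketch with made-up groups (not data from the study):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: MS_between / MS_within."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical groups of congestion counts per field:
print(round(one_way_anova_f([1, 2, 3], [2, 3, 4], [7, 8, 9]), 1))  # -> 31.0
```

The F value is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p value, as a statistics package does internally.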
DATA AVAILABILITY
The corresponding author will provide the original data used to support the findings of this study upon reasonable request.
Caring for older patients with reduced decision-making capacity: a deductive exploratory study of ambulance clinicians’ ethical competence
Background As more people are living longer, they become frail and are affected by multi-morbidity, resulting in increased demands from the ambulance service. Being vulnerable, older patients may have reduced decision-making capacity, despite still wanting to be involved in decision-making about their care. Their needs may be complex and difficult to assess, and do not always correspond with ambulance assessment protocols. When needing an ambulance, older patients encounter ambulance clinicians who are under high workloads and primarily consider themselves as emergency medical care providers. This situates them in the struggle between differing expectations, and ethical conflicts may arise. To resolve these, providing ethical care, focussing on interpersonal relationships and using ethical competence is needed. However, it is not known whether ambulance clinicians possess the ethical competence required to provide ethical care. Thus, the aim of this study was to deductively explore their ethical competence when caring for older patients with reduced decision-making ability. Methods A qualitative deductive and exploratory design was used to analyse dyadic interviews with ambulance clinicians. A literature review, defining ethical competence as comprising ethical sensitivity, ethical knowledge, ethical reflection, ethical decision-making, ethical action and ethical behaviour, was used as a structured categorization matrix for the analysis. Results Ambulance clinicians possess ethical competence in terms of their ethical knowledge, highlighting the need for establishing an interpersonal relationship with the older patients. To establish this, they use ethical sensitivity to interpret the patients’ needs. Doing this, they are aware of their ethical behaviour, signifying how they must act respectfully and provide the necessary time for listening and interacting. 
Conclusions Ambulance clinicians fail to see their gut feeling as a professional ethical competence, which might hinder them from reacting to unethical ways of working. Further, they lack ethical reflection regarding the benefits and disadvantages of paternalism, which reduces their ability to perform ethical decision-making. Moreover, their ethical knowledge is hampered by an ageist approach to older patients, which also has consequences for their ethical action. Finally, ambulance clinicians show deficiencies regarding their ethical reflections, as they reflect merely on their own actions, rather than on their values. Supplementary Information The online version contains supplementary material available at 10.1186/s12910-023-00941-w.
Background
Populations are ageing worldwide, and the number of people aged 80 or above is expected to triple between 2020 and 2050 [1]. As people age, their bodies and minds become worn, thus weakening in a natural way to become frail [2]. This results in a lowered bodily, mental, and social resistance to deal with strain and stress. Adding to this, many are affected by multi-morbidity and a high symptom burden that further limits their executive capacity and makes them vulnerable [3]. At high age, heart failure, cancer and major neurological disorders are common [4], and these are often combined with varying degrees of dementia, a disease that increases in parallel with longevity and causes loss of cognitive abilities, dependency and impaired decision-making capacity [5]. Despite this, older people usually value being involved in decision-making about their health care [6], and the sense of having control over their own life contributes to dignity in the midst of vulnerability [7]. This includes those older people who independently choose not to exercise control and transfer their decision-making to trusted others [8].
The rising prevalence of older people affected by multimorbidity has led to increasing demands on ambulance services [9]. Older patients' vulnerability makes them potentially difficult to assess in acute care, as their needs are complex, thus not only medical, but also psychological, social and existential, in combination with a lowered personal capacity to act appropriately on their own behalf [10]. This group of older patients constitutes a significant proportion of non-conveyed patients, as they are often classified as having non-specific complaints that do not fit with the ambulance clinicians' (ACs) assessment protocols [11]. ACs, who consider themselves providers of emergency medical care working under high workloads of dispatch calls, have described themselves as being in a struggle between differing expectations when they disagree with older patients' requests for emergency care, while also enjoying taking an interest in their problems and providing them comfort [12]. The ACs' inner struggle conveys a risk of lacking empathy and seeming rude to the patients. Thus, ethical conflicts may arise when there are different care options and the older patients' best interest is difficult to discern, or when ACs, older patients and bystanders disagree on the emergency care needs [13].
As older vulnerable patients have a lowered capacity to defend and protect their rights, they are at risk of abuse and neglect by the healthcare professionals, who hold the power in an asymmetrical patient relationship [14]. Thus, in nursing practice, such as in ambulance care, ACs must adhere to the four cornerstones of biomedical ethics by respecting patients' autonomy, promoting beneficence and nonmaleficence, and striving for justice [15]. In practice, ethical care is performed within an interpersonal patient relationship that builds upon respect and benefits the well-being of both ACs and patients [16]. This promotes caring relationships that constitute the foundation of ethics. To achieve this, ACs need to possess ethical competence, broadly defined as consisting of ethical sensitivity, ethical knowledge, ethical reflection, ethical decision-making, ethical action and ethical behaviour [17]. These concepts are distinct, though closely related to each other. For instance, ethical action depends on how the carer relates to the other dimensions of ethical competence.
In summary, as more people become old and frail and are affected by multi-morbidity, demands on the care provided by the ambulance service increase. Due to their vulnerability, older patients may have reduced decision-making capacity, despite still wanting to be involved in decision-making about their health care. Their needs are complex and difficult to assess, and do not always align with ACs' assessment protocols. When in need of an ambulance, older patients meet ACs who are under high workloads, and who primarily consider themselves providers of emergency medical care. This places ACs in a struggle between differing expectations, and ethical conflicts may arise. To resolve these, ACs need to provide ethical care that focusses on interpersonal relationships, using their ethical competence. To our knowledge, it is not known whether ACs possess the ethical competence required to provide ethical care. Thus, the aim of this study was to deductively explore ACs' ethical competence when caring for older patients with reduced decision-making ability.
Design
In compliance with the aim, a qualitative deductive and exploratory design was used to analyse dyadic interviews with ACs.
Setting
The publicly funded healthcare system in Sweden is divided into 21 healthcare regions, which organize the ambulance service based on conditions and needs in each region. The number of ambulance stations and their location differ between the regions, based on variations in population density and geographical conditions. The study was conducted in a southeast Swedish region where the ambulance service provides care for approximately 203 000 inhabitants in rural and urban areas on a total of 8 458 km². The overall population density was 24/km². The region had eight ambulance stations with a total of 17 ambulances providing Advanced Life Support (ALS). The ambulances were staffed with two ACs, at least one of whom was a specialist ambulance nurse (SAN).
The education and training to become a SAN comprises a 1-year master's degree and a postgraduate diploma in specialist nursing for registered nurses (RNs). In order to qualify for the programme, students must be registered as an RN with a Bachelor of Science degree, including specialisation in caring or nursing science. The staff within Swedish ambulance services consists of specialist trained registered nurses, registered nurses (RNs) and emergency medical technicians (EMTs). Each ambulance must be staffed by at least one licensed AC. In 2021 the Swedish ambulance service consisted of 53% specialist trained RNs, 28% RNs, and 19% EMTs. SANs and other specialist trained nurses have undergone a 4-year university education, i.e., three years to become an RN and one year of specialist education. EMTs typically have 2 years of high school nursing education, supplemented by a 6-month to 1-year ambulance care specialization. In the present study, the distribution in the participating region was 78.5% specialist trained nurses (dominated by SANs), 15.5% RNs, and 6% EMTs. The total number of employed ACs was 148 [18].
Patient care in the Swedish ambulance services generally focuses on the patients' physical and biomedical status by using the A-E principle (airway, breathing, circulation, disability, and exposure), observing vital signs, and listening to the patient's perceived symptoms of illness. Most ACs use the Rapid Emergency Triage and Treatment System (RETTS) to assess the patient's medical care needs. In addition to these general guidelines, there are a number of clinical practice guidelines and concepts for assessment, prioritization and treatment of specific situations and conditions. Among these can be mentioned guidelines for home-based self-care, non-conveyance of the patient, prehospital trauma life support, advanced medical life support, major incident medical management and support, prehospital medical management, and ongoing deadly violence [19].
Participants
The participants were recruited based on a convenience principle. Employed ACs (n = 35) were informed about the study by the second author in a staff meeting or by their head in regular staff meetings. Those who showed interest but did not participate in the staff meetings were contacted by the second author. The inclusion criteria were clinically active ACs with professional affiliation as RNs, with or without specialist education in ambulance care, anaesthesia care or intensive care, or as EMTs.
In total, the 30 ACs who agreed to participate were assigned in pairs to 15 interviews. Participating EMTs (n = 4) had a mean age of 57 years (range 47-65), while the RNs' (n = 26) mean age was 41 years (range 30-56). The work experience of the EMTs was 31 years on average (range 16-40), while the RNs had an average of 11 years of work experience (range 1-28).
Data collection
Dyadic interviews were chosen to give participants the opportunity to share their thoughts and feelings with each other during the joint interview. Dyadic interviews aim to examine the participants' experiences when brought into dialogue, as each participant contributes their perspective to the whole. This enables researchers to capture nuances and characteristics in a way that is difficult in individual interviews [20]. The idea of dyadic interviews in the present study was also to mimic the ambulance team and reach similar discussions in the dyad. Some dyads were made up of the ambulance team that normally worked together, while other dyads were put together only for the interview occasion. Interview data were collected between December 2019 and February 2020, using a case vignette technique, that is, providing short descriptions of situations with specific circumstances for participants to reflect upon [21]. This method is relevant when studying professionals' actions, as it generates knowledge of their ideas, explanations, values, norms and ethical positions [22]. A three-step case vignette, based on an emergency prehospital situation mirroring an ethical dilemma, was used (see supplementary file, Table S1). The vignette was constructed specifically for the present study and was based on literature reviews, methodological literature, and a critical review of the authors' experiential knowledge.
The vignette was presented following a joint structure in all interviews, where steps two and three of the vignette were presented to the participants when their narratives subsided. Open-ended follow-up questions were posed, such as: "How do you assess the patient's decision-making ability?" Follow-up questions were asked to elaborate further on the ACs' understanding of older patients' self-determination when caring for those with reduced decision-making ability. The interviews (n = 15) were recorded and lasted 35-77 min (mean = 61 min). They were performed by the second author and transcribed verbatim by a professional transcriber.
Data analysis
This study is a secondary analysis of a rich dataset. A primary analysis with an inductive and meaning-seeking thematic approach will be published elsewhere. A deductive content analysis [23] was performed, starting with the first author listening to the interview recordings and reading transcripts to obtain a sense of the whole and become familiar with the data. A literature review by Lechasseur et al., including 89 articles defining ethical competence in the context of nursing practice [17], was used to construct a structured categorization matrix. The concepts of ethics in nursing defined in the review then formed the main categories, while explanatory concept-related text drawn from the review was used to generate headings for the sub-categories (Table 1).
A search for content that corresponded with the matrix's main categories and sub-categories was performed, and the extracted data were placed in datasheets. The categorization of the data followed the structure of the matrix, resulting in the development of the six main categories and sixteen sub-categories presented below. The first author performed the analysis.
Ethical sensitivity
ACs identify the limited time spent with older patients as an ethical problem that may result in actions that overrule the patients' explicit desires and caring needs. To counteract this, they interpret the patients' needs by compassionately observing and listening.
a. ACs evaluate older patients' glances and gestures to note whether there is an immediate and positive connection or whether the patients react by withdrawing. If the patients withdraw when touched, it is tentatively interpreted as unwillingness to accompany the ambulance and a declining of care. Also, the reactions of significant others and care staff are observed to assess whether ambulance transport was ordered in consultation or was a unilateral decision. If the patients remain passive, this may be interpreted as having given in to the pressure of others.
b. Older patients' needs are primarily interpreted from what they tell ACs. Thus, the ACs' initial questions are open-ended and aimed at obtaining the patients' own description of their condition, but also at assessing their cognitive status. If impaired cognition is obvious, ACs turn to significant others or care staff to interpret the patients' needs. The patients' behaviour is noted and serves as a guide in the assessment but needs to be verbally confirmed to be considered reliable. The behaviour of bystanders is also interpreted and contributes to the assessment, for example if there is great urgency in packing personal belongings to send with the patients.
c. ACs use their compassion to assess the situation. They empathize with the older patients and try to imagine their situation of weakness and vulnerability, understanding that their ability to make their will heard may be limited. They therefore side with the patients and their right to make independent decisions, even if it may mean that they refuse care.
I think you have to respect her will, if it's the care she doesn't want. That she wants to finish. Maybe she's tired of her suffering ... she's certainly been in hospital a lot, in and out … (Interview 4)
ACs feel sorry for older patients who are transferred to new environments at inconvenient times while suffering from symptoms that may be exacerbated by the transition. Likewise, they experience uneasiness when carrying out treatments that may harm the patients' bodies. Similarly, ACs describe having a guilty conscience when they provide CPR to patients whom they deem to have little chance of survival. ACs understand that significant others can react negatively based on a lack of healthcare experiences, internal conflicts when receiving less information than others, or lack of insight due to making few visits to the patient. Significant others' desire to keep their loved one alive can explain their inability to accept the patient's deterioration, thus urging the ACs to do everything in their power to save these patients' lives.
d. ACs identify the short time they spend with the older patients as an ethical problem that hampers their ability to determine the patients' cognition and decision-making ability. This becomes more difficult when the patients are so lacking in consciousness that they cannot express their will. ACs turn to significant others or care staff, which can then lead them to transporting the patients to hospital against their will. This problem can be accentuated when patients are dying and communication about former care decisions is lacking:
I had no idea there was a decision on palliative care and so we just went in and demolished what patient and doctor planned about fourteen days ago. I think that's hard. (Interview 6)
This can mean that dying patients are subjected to CPR and the stress of being transported, despite having a Do Not Attempt Resuscitation (DNAR) order.
Ethical knowledge
ACs mention ethical concepts, but do not elaborate on their meaning. Rather, they rely on their own personal experiences of how older people function. Conversely, they also possess knowledge about contextual possibilities for care, and about the value of establishing a mutual relationship that clarifies the patients' perspective.

a. ACs describe how important it is to create a mutual and intimate relationship with older patients, partly to ascertain their cognitive status, but above all to gain knowledge about their personal wishes. Thus, ACs initiate a dialogue to reduce the influence that their own interpretations may have on the situation. In these conversations, it is important to listen and be sensitive to the patients' thoughts and experiences. To achieve this, ACs ask investigative questions to encourage storytelling and patient participation. In such conversations, the patients' trust is expected to grow. Meanwhile, privacy is ensured by the colleague, who asks others to leave the room. ACs also pose general questions about other things, as this is believed to make the patients relax. At times, making special arrangements is beneficial:
I proposed we should have a cup of coffee and sit down and reason a little bit about this. And we did. And all of a sudden when you sit with a cup of coffee, everything becomes much easier to solve. Then you break down these roles, the ambulance roles, the family roles and the role of the patient, you become involved on a completely different level. (Interview 7)
If the patients have difficulties speaking, ACs talk to significant others or staff who may know the patients well. In all conversations, it is important to be clear and to explain until everyone has understood what options exist to resolve the situation.

b. ACs imagine the outside world from the perspective of the patients, which makes them understand that older, ill patients do not always have the strength to make their will heard in competition with the voices of the strong and healthy. ACs describe a risk of making the patients feel anxious when they are transferred to an ambulance and then transported to a hospital, where they are usually unfamiliar with both the context and the people surrounding them. Patients who are cared for at home are believed to find it easier to be self-determining, as home is most often perceived as a safe place.

c. Taking life-saving measures with multi-diseased and dying patients, followed by strenuous ambulance transport to hospital, was described as meaningless, only serving to increase the patients' suffering. Treatment can alleviate suffering, but so can limiting the amount of care provided. Transporting patients to hospital against their will was described as an assault, as the patients' statutory right to exercise self-determination over their own body and life is then ignored. Additionally, ACs risk violating the patients' dignity when exposing their bodies to heavy-handed treatment, i.e., CPR.

d. ACs take the environment on site into account when drawing conclusions, i.e., when observing an older patient who receives help with oral care and concluding that this person is at the end of life. Likewise, they have a contextual awareness of other professionals' competence. Once a palliative care team is involved, ACs assume that the patients have planned to die at home, surrounded by a multi-competent team of registered nurses, doctors, and care staff, who possess the same medical resources found in hospitals,
and are able to explain end-of-life issues to significant others. In home care, there is a lack of time for providing such attentive care, as municipal registered nurses are responsible for a large number of patients. However, the municipal home care staff are expected to have good knowledge of how to provide good basic care, and thus to be able to easily continue care once ACs have relieved any acute problems. When acute illness occurs in nursing homes, care staff are often perceived as insecure and uninformed, which explains why they call for an ambulance. However, ACs think that it may be better for patients to remain at the nursing home, as there is access to drugs and round-the-clock monitoring. Moreover, ACs have lay knowledge based on their own personal experiences, knowing that older patients often become confused and distressed when hospitalized. Further, older people are assumed to come to a point when they peacefully accept that life is over and prefer to die calmly at home. ACs experienced that well-intentioned measures taken against the patients' will can have negative consequences:

When mom had cancer, they wanted to take a biopsy on the tumors ... She let them do it, but didn't want them to take anything away, because she'd rather die earlier and keep her quality of life. There they overruled her, because they picked off tumors anyway. Then she wasn't clear in her head anymore and that feels bitter now. After all, they did their best, thinking that there was nothing to lose. But there was ... (Interview 3)

Therefore, ACs advocate that one should always comply with the wishes of the older patients.
Ethical reflection
ACs' internal reflections primarily concern their own actions, not their values. They constantly consider risks and benefits to older patients, but also regarding themselves. In their reflections, their organization is medical-oriented, but with time and experience they can earn the courage to make more holistic nursing assessments.

a. In situations where older patients are acutely ill or have suffered cardiac arrest, ACs make ethical considerations regarding the need for CPR and hospital care. Some ACs say they lack choice, as their guidelines are strictly medical. Therefore, they treat and transport older severely ill or dying patients and provide CPR even if it feels wrong. Other ACs believe that older patients' self-determination is superior to the guidelines and refrain from providing, or discontinue, treatment when necessary to protect the patients' dignity:

I was given a regular transport, a patient with breathing problems, and found an older man with Cheyne-Stokes breathing. I said: 'We can't drive him in this condition, he can die on the road!' We stopped and held his hand while he died peacefully. I have no problem making such decisions, because I look at the ethical. We do not have eternal life, that's important to remember. Not fight to keep them alive but let them finish in a good way. (Interview 7)

b. When reflecting on their role, ACs describe themselves as older patients' advocates. They have been called to the site for the patients' sake and are therefore prepared to fight to defend their will. At the same time, their mission is to save lives, work quickly, and transfer patients who sometimes have long transport times to hospitals. They are trained to solve problems, to start from metrics and to follow guidelines instead of valuing the patients' quality of life. In the ACs' reflections, the ambulance culture values medical knowledge more highly than nursing knowledge. Therefore, it feels wrong not to transport the patients to the hospital when someone has raised the alarm.
ACs describe themselves as having authority, based on their competence and education. This means that their medical assessments are more knowledgeable than those made by older patients, significant others and other staff. In their reflections, this is why they are not always responsive to patients' wishes, but instead persuade them into complying with their own suggestions. Sometimes, ACs stand between the will of the patients and that of their significant others, which makes decision-making difficult. However, deciding whether the patients are going to hospital or not is occasionally considered the work of others:

This lady has a nurse from the palliative team who looks after her. And I think, they have to make that decision! We wouldn't even have to interfere in it, but be able to say: 'We'll wait outside, and you'll tell us when you've decided whether or not she's going to join'. (Interview 10)

At the same time, this is described as a difficult situation, where ACs are afraid to make mistakes and risk being reported and losing their right to practise their profession. They consider it their duty to inform about alternative solutions to facilitate informed decisions, but, in uncertain situations, they prefer to take the patients to hospital to protect themselves from disciplinary actions.
In the ACs' reflections, despite prioritizing medical assessments, the ability to make holistic nursing assessments can grow over time and make an AC confident enough to refrain from CPR, to question doctors, and to side with older patients who want to stay at home. Experienced ACs possess both medical and nursing experience that recent graduates often lack; therefore, those with less experience must rely entirely on clinical metrics. Thus, there is a desire for ACs to be given greater scope for making nursing assessments in practice, and they also describe that more knowledge about multi-diseased older patients and ethics is necessary.

c. ACs describe feeling ambivalent about decisions they have made when choosing to follow older patients' will, contrary to their guidelines. In their internal reasoning, they ask themselves what benefits alternative measures could have had. In some cases, they defend paternalism because the patients did not understand their own best interests, and in other cases, they regret their actions:
Who am I to decide when people should receive care or not, if they are not at the full use of their minds? I can't decide who will live or die. (Interview 6)
In order to prevent negative consequences, ACs carefully document their own actions and the patient's wishes in the medical record when their actions deviate from medical guidelines stating that the patient's condition should be assessed and treated in hospital.In cases when they are convinced that the decision is in the patient's best interest, they may even adapt the documentation to protect themselves against disciplinary actions.
Having conversations with colleagues facilitates important reasoning that helps ACs to be satisfied with their actions. Discussion and mutual care planning are initiated as soon as ACs receive the initial information about the assignment, which creates a common approach. When meeting older patients, there is a continued discussion about what measures are judged to be best. Establishing unity between colleagues provides important emotional support, where a glance or a nod may be enough. After completing the assignment, it is valuable to talk it through together to confirm that decisions were appropriate. Sometimes colleagues cry together. Therefore, openness, lack of prestige and honesty are described as important prerequisites for fostering professional development in a context where one is thrown between extremes on a daily basis.

d. ACs constantly consider the risks and benefits of their practices to older patients and balance these against the needs of society, especially a parallel or possible need in patients with more serious conditions than those presented by the older patient to whom they have come.
To older patients, it is not considered useful to be transferred to hospital at the end of life, as they often face long waiting times, only to be sent back without any care measures being taken. According to ACs, hospital transport causes older patients nothing but strain:
I probably put more effort into persuading a 70-year-old than a 90-year-old to come along. A 90-year-old does not survive abdominal surgery in the same way, for example. The odds aren't quite as great. (Interview 4)
Furthermore, ACs believe that the palliative care older patients receive in their homes is equivalent to hospital care. From a societal perspective, ACs carefully consider older patients' need of hospital care, as their stay burdens the economy and adds to the staff's workload. Additionally, when older patients are admitted to hospital, they occupy hospital beds that could have been used for other patients with greater needs. Finally, transporting older patients to and from the hospital means that the ambulance is not available for emergency situations. Thus, ACs prefer to solve the situation by relieving the patients' symptoms on site and coordinating the continued care with other actors.
In the process of choosing between several options, ACs rely on their routines and professional experience. In relation to others, this may entail the provision of information, or persuasion in favour of ACs' opinions and values.
In situations with multiple possible courses of action, ACs undergo a process to devise a reasonable and responsible alternative. Initially, older patients are visually assessed upon the ACs' arrival. These impressions are processed and compared with previous experiences to provide an overall picture of the situation. If an older patient has suffered a cardiac arrest, decisions are made in hasty consultations with significant others or staff present, to assess the patient's prognosis. When the patients can communicate, questions are asked while the patients are being examined. In the event of reduced communication ability, ACs turn to significant others or care staff on site, or contact doctors to get more information about the patients' status and anamnesis, whether previous decisions about palliative care at home have been made, and whether there is a DNAR order. When the patients want to stay at home while those around them want them to go to hospital, ACs can take the patients' side:

Patients are always subordinated. This can be used to their advantage, as we still have our experience with us. Many times relatives listen when we explain that they will do nothing about this in the hospital, it is better for her to stay at home in her bed. Call the health care centre tomorrow instead. (Interview 10)

If patients, significant others, or care staff have opposing views, ACs try to facilitate decision-making by clearly informing them about possible consequences, with the ambition of creating an in-depth understanding of the situation. When ACs leave the site without the patient, they make sure that they have provided contact information for another, more suitable, healthcare provider.
Ethical action
For ACs, ethical action merely means providing needed care on site and avoiding unnecessary actions. They do not talk about care in the ambulance or at the hospital in terms of ethical action.
ACs' actions sometimes consist of doing nothing at all, for example when an older, multi-diseased person has suffered a cardiac arrest. This can cause ACs to make a decision above their authority, but one which is still deemed necessary, as the patient's life is over. At other times, it is about mediating, reasoning, or taking a stand with significant others or doctors who want to act against the patients' will. When the patients' status is difficult to assess and the available information fails to allow for this mediation, ACs take the patients to hospital for medical assessment. In situations where the patients are competent to make decisions and refuse ambulance transport, or have previously made a decision to be cared for at home, ACs strive to provide symptom relief on site:

If there is something you can do at home, it is better to take care to them, than them to the care. (Interview 2)

The patients are then left at home in accordance with their own wishes, which ACs describe as an obvious measure.
To behave ethically, ACs trust an inarticulate, intuitive gut feeling that helps them perceive nuances in other people's demeanour. They show respect through body language, by listening, and by allowing interactions with others to take time.
a. Showing respect for others means respecting the will of older patients, which is largely about how ACs behave on site. Sitting down with the patients to have a conversation, instead of standing up near the entrance door, means showing respect. Similarly, ACs show respect by staying and conversing for a while, even with patients who decline transport to hospital. It is described as respectful to clearly convey to the patients when ACs turn to significant others for more information, instead of doing it secretly. In addition, ACs show respect for the patients by speaking up when significant others are assertive:
When entering a room, the patient often lies down and the relatives are in one's ear all the time. Sometimes you almost can't think because they talk so much. And then I can say a little demonstratively: 'Thank you very much, but now I want to listen to NN and hear what he says. ' You might step on someone's toes a little bit, but the most important thing we have is to stand up for the patient. (Interview 12)
Thereafter, ACs show respect to significant others by listening to them, too. This respectful behaviour also applies to colleagues in other settings, as ACs sometimes rely on their assessments rather than making their own.

b. Behaving in a controlled and moderate manner is about being courteous towards older patients, significant others, and care staff, i.e., when proposing a different solution from theirs. To lay the foundation for wise joint decisions, ACs convey security to those they meet and start conversations on an equal level. This often requires time, regardless of the ACs' stress, hunger or tiredness. In such conversations, ACs talk in ways that are assumed not to harm or frighten, while giving the other party time to realise the situation. It is described as important to dare to initiate a conversation, even if the situation is unpleasant and met with aggressive responses. If patients or significant others behave agitatedly, ACs avoid conflicts by taking a step back and allowing space for emotional reactions.

c. Being responsive is about perceiving the atmosphere in a room, for example, noticing whether something indicates an ongoing conflict. It is also about noticing colleagues' reactions, as they may have perceived something that needs to be considered. Responsiveness also means sensing the older patients' state of mind, to understand whether the patients present a genuine wish or only want to please someone else. ACs describe having an ability to interpret the patients' silence as approval, resignation or despair. Sometimes they sense that the patients want to convey something unspoken. ACs find this capacity for responsiveness difficult to put into words, but describe it as an intuitive gut feeling:
I couldn't decide what he wanted, it was just a feeling I got. It's not always, but in many cases you become sure then. Many times you should trust this feeling, what feels right in your heart. You shouldn't belittle that, because that's often what makes it good. It is not always possible to follow a rulebook. (Interview 12)
d. ACs confirm the concerns of older patients and significant others by answering questions, thus signalling that their worry is understandable. Many become calmer as soon as ACs arrive on site, which is reinforced by an examination of the patient or an assurance that measured values are normal. The concerns of older patients may be due to memories of being treated rudely at the hospital. ACs then alleviate their concerns by promising to report this on arrival, thus preventing it from happening again. When patients are left at home, ACs sometimes offer to return for a supervisory visit, or reassure the patients that they are welcome to call again:

Just because we've been there once, the door isn't closed for good so you can't call any more. It's not like that. You have the opportunity to come back if it gets worse, or if something new should happen.
(Interview 10)
Alternatively, ACs help the patients to arrange an appointment at the healthcare centre so that they feel calm when the ACs leave. To alleviate the worries of significant others, ACs carefully explain what they have done and what measures they have taken to examine the patient. In this way, significant others actively participate in the care, which is intended to have a calming effect, especially when the patients are seriously ill.
Discussion
The results show that ACs possess ethical competence, which is used when caring for older patients with reduced decision-making ability. This competence is primarily characterized by ethical knowledge regarding the importance of a relationship, which manifests itself in a desire to investigate what the patients want and to adapt care accordingly to safeguard the patients' self-determination. This aligns with an earlier study, which showed that ACs attempt to respect older patients' self-determination by collaborating with them [24]. Such promotion of patient participation in the planning of their own care has been described as an indicator of competent nursing [25]. To promote a trusting relationship, ACs describe the importance of listening to the patients and studying their reactions in order to also perceive the unspoken. This creates conditions for a caring encounter based on presence, recognition, availability and mutuality [26]. Within such a trustful relationship, a carer and a patient can engage in an authentic and honest dialogue that creates a space of togetherness leading to mutual well-being. To ascertain the older patients' genuine will, ACs use ethical sensitivity, which helps them interpret the patients' needs by listening to what they say and by observing how they and other people nearby behave. In this, ACs describe themselves as being the patients' advocate, despite constant time pressure, and regardless of their own stress, hunger or tiredness. The ACs' focus on trustful relationships and overall responsibility can be described as holistic care that views older patients as biopsychosocial beings who need to be included in the planning of, and decisions about, their own care [27].
In order to create security and enable open communication, ethical behaviour is shown through respect, courtesy and control, even in situations when ACs are treated aggressively. According to Lechasseur et al., respectful behaviour is an important dimension of holistic care and a sign of a developed patient-carer relationship [17], which confirms the ACs' ethical competence. As part of this ethical behaviour, ACs describe themselves as working from a gut feeling that helps them to navigate emotional situations. They find it difficult to put this ability into words, but it can be described as an ethical ability to intuitively be touched by the feelings of others and to identify with their distress [15]. This competence is, according to the International Council of Nurses [28], expected to be held by all professional nurses, who are also expected to use this skill to contribute to ethical organizations and to question unethical ways of working. Interestingly, the ACs in this study, who do question their own organization's focus on medical assessment and care measures and who desire greater scope for holistic nursing judgements and more knowledge of ethics, do not define their own ethical ability as the fruit of professional practice. This reveals shortcomings in their holistic perspective that may hamper their ethical competence. Consequently, not acknowledging ethical competence as a form of professionalism may hinder them from reacting to unethical ways of working.
ACs tend to possess a degree of ethical competence, both in respecting the patient's autonomy and in making decisions in line with the patient's best interests. In decision-making for the patient's best interests, some describe themselves as having the authority, based on experience and knowledge, that entitles them to also persuade patients and significant others who do not want to follow their recommendations. This might be beneficial, as an AC who trusts in his or her own competence and experiences is more likely to gain the patients' trust and may find it easier to apply a caring approach [29]. Alternatively, this attitude can be described as a risky exercise of power, or paternalism, that is, preventing the patient from having a choice on the basis that this would not be in the patient's best interest, grounded in the assumption that the patient cannot make a well-considered decision [30]. This can be said to reveal shortcomings in some ACs' ethical reflection that lead to problems with ethical decision-making. ACs' paternalistic attitudes can be questioned as, according to the study results, they occasionally base their reasoning upon their own personal experiences of how older people function and think. This indicates a lack of ethical reflection, as ACs in this study appear to consider it always wrong to convey patients who do not want to go to hospital, and thus wrong to persuade them. This raises the question of whether paternalism is always wrong. As shown by Nordby [31], ACs can use their knowledge and experience to look forward in time and assess possible outcomes in ways that the patients cannot. Thus, if they foresee that compliance with an upset patient's wishes may entail future health risks, their persuasion can in fact be understood as respecting the patient's autonomy, particularly if the patient would otherwise agree after having regained a more sober perspective at a later and less acute stage. This way of protecting patients from the harmful consequences
of following their involuntary choices can be termed soft paternalism [30]. As actions in themselves cannot be paternalistic, one should look at the motive behind them. Hence, when the motive is respectful and aimed at protecting patients from future harm, one may conclude that not all persuasion is paternalistic in a way that threatens the patients' autonomy, dignity and integrity.
An aspect indicating a lack of ethical knowledge and ethical reflection concerns the views and preferences of older patients. ACs in this study seem to assume that older patients with impaired decision-making ability feel best about being cared for in their home, not in an ambulance or in a hospital. This is questionable, as other studies indicate that ambulance service assignments frequently involve older persons from the age of 65 and upwards [32], which means that this age group can include two generations with widely different life experiences and wishes for their care. Further, ACs in this study prefer to provide care on site and avoid taking older patients to hospital, regarding their hospital stays as often unnecessary and a burden to society. This indicates deficits in their ethical acting, as no account is seemingly taken of the complexity that older patients' multiple diseases can present. The illness trajectory of a patient with heart failure differs from that of cancer patients, or of patients affected by frailty and/or dementia [33]. Being multi-diseased, older patients can follow all these trajectories at the same time, which may make their symptoms more difficult to assess. Therefore, it cannot be denied that many of them could benefit from being provided care in the ambulance, a space that older patients have described as containing advanced resources and competent staff who can provide the needed aid and safety in a vulnerable situation [34]. Thus, the ACs' tendency to leave older patients at home can be seen as an expression of ageism, signifying discrimination against, and prejudicial stereotyping of, older people [35]. This can include a behavioural component, where ACs assume that all older people are generally vulnerable and weak and treat them as such, thereby discriminating against them. The ACs' actions may seem empathetic, but older patients have been shown to cherish their freedom [36]. Thus, regardless of bodily weakness,
they often want to be involved in decisions concerning their lives. Ageist actions, despite being performed in a gentle manner, may therefore make older patients feel ignored and objectified. Adding to this, older people may possess an inner strength, acquired throughout the various struggles of a long life, that helps them uphold their decision-making capability [37]. Consequently, many of them may be less vulnerable than their outer appearance indicates. ACs in this study seem to disregard this inner strength of older patients, which reveals some shortcomings in their ethical reflection and may in turn have an impact on their ethical actions. In their defence, it can be said that they are aware of their lack of knowledge about multi-diseased older patients.
In situations of emergency care, i.e., when an older patient has suffered a cardiac arrest, ACs reveal a strong desire to protect themselves, by treating and transporting dying patients not primarily for the benefit of the patients, but in order to follow their medical guidelines and prevent criticism and disciplinary actions. Thus, when reflecting on their own role, they merely reflect on their own actions and not their values, which reveals a lack of ethical reflection. What seems to be needed here is the virtue of courage, that is, the middle course between cowardice and recklessness [38]. Nevertheless, ACs in this study expect their ability to make holistic assessments to grow with time and experience, and to develop their courage and ability to stand up for the patients' will. This is congruent with the findings of an earlier study, highlighting that ACs' trust in themselves can grow over the years and develop into an inner security and confidence in their professional role [29]. In this study, ACs reflect on everyday collegial communication as something valuable, which prepares their actions and confirms their reasoning after a completed assignment. This aligns with an earlier study describing how ACs' work in dyadic teams increases their confidence, broadens their experience and generates clarity on their way to the patient [39]. Consequently, well-functioning dyadic teams appear to be a prerequisite for developing ethical competence when caring for older patients with reduced decision-making ability.
Thus, although ACs in this study elucidate the importance of the patient relationship and have, in some respects, developed an ethical competence that achieves holistic and relational care, they also appear to lack sufficient knowledge regarding the complexity of relational ethics, in terms of paternalism, shared decision-making and ageism, and regarding multi-diseased older patients. If they had a deeper understanding of these aspects, and consequently a more extensive ethical competence, one may anticipate that they would find it easier to cope with difficult situations when caring for older patients with reduced decision-making capacity.
Limitations
This study has some limitations. First, attempting to capture the phenomenon of ethical competence when caring for older patients with reduced decision-making ability within a limited healthcare context, such as ambulance care, can be considered a limitation. The transferability of the results to other contexts should therefore be approached with some caution, also considering the relatively limited sample size from one region of one country. However, there is reason to assume that the results are transferable to contexts with similar demographics, resources, healthcare systems, and ambulance services staffed by registered nurses with a level of education and competence similar to that of Swedish nurses. Second, ethical competence is multidimensional and there is disagreement about which dimensions the competence should include. However, the choice of a deductive approach and the use of a definition of ethical competence in the context of nursing practice should increase the possibility of comparing ethical competence between healthcare contexts in the future, if studies replicate the same approach. The use of a well-described vignette in the interviews also increases the replicability of the study. Third, the first author is a registered nurse with experience in nursing home contexts, thus lacking experience within ambulance care. This lack of pre-understanding may have influenced the analysis, as pre-understanding is an asset that helps researchers to understand the data [40]. However, the two other authors are experienced ACs, and, in critical discussions performed within the research group as a whole, the common pre-understanding was broadened and enriched by the first author's experience from another care context that concerns older patients. The conscious reflective stance in the data analysis is deemed to have strengthened the validity of the study. Further, engaging in critical discussions with other researchers has also contributed to the trustworthiness of this study [23]. Finally, the
reliability of the study is judged to have been strengthened by the accuracy and reporting of the research process and the vignette, and the use of a definition of ethical competence in the data analysis.
Conclusions
ACs possess an ethical competence in terms of ethical knowledge that highlights the need for an interpersonal relationship with older patients. To establish this relationship, they use ethical sensitivity to interpret the patients' needs. In doing so, they are aware of the importance of ethical behaviour, that is, to act respectfully and provide the time needed for listening and interaction. However, they fail to see their gut feeling, on which they rely in precarious situations, as a professional ethical competence, which might hinder them from reacting to unethical ways of working. Further, they lack ethical reflection, for instance regarding the benefits and disadvantages of paternalism, which reduces their ability to perform ethical decision-making. Another aspect of reduced ethical knowledge is their ageist approach to older patients, shown by their opinion that older patients feel best when being cared for in their home, which has consequences for their ethical acting. Finally, ACs show deficiencies with regard to their ethical reflections, as they reflect merely on their own actions rather than on their values.
List of abbreviations
AC: Ambulance clinician
DNAR: Do Not Attempt Resuscitation order
Ethical competence
1. Ethical sensitivity: a) evaluates and interprets reactions and feelings; b) interprets needs based on what is said and on behaviours; c) uses compassion; d) identifies ethical problems
2. Ethical knowledge: a) has relational knowledge (emphasizes the importance of mutuality, relationship and curiosity); b) has knowledge of embodiment (emphasizes the importance of dealing with bodies as lived subjects); c) has philosophical, theoretical, and practical knowledge; d) has contextual awareness and lay knowledge
3. Ethical reflection: a) reflects on ethics in considering different courses of action; b) reflects on one's own role and task as ambulance carer; c) has an internal reasoning that clarifies one's own values; d) balances risk and benefit based on prioritization, equality and morality
4. Ethical decision-making: undergoes a process that leads to a reasonable and responsible choice between several options
5. Ethical action: acts on the basis of knowledge, reflection, analysis and decision-making
6. Ethical behaviour: a) shows respect for others; b) behaves masterfully and moderately; c) is responsive; d) confirms the other's concerns
Solubility and reactivity of HNCO in water: insights into HNCO's fate in the atmosphere
A growing number of ambient measurements of isocyanic acid (HNCO) are being made, yet little is known about its fate in the atmosphere. To better understand HNCO's loss processes and particularly its atmospheric partitioning behaviour, we measure its effective Henry's Law coefficient K_H^eff with a bubbler experiment using chemical ionization mass spectrometry as the gas-phase analytical technique. By conducting experiments at different pH values and temperatures, a Henry's Law coefficient K_H of 26 ± 2 M atm^-1 is obtained, with an enthalpy of dissolution of −34 ± 2 kJ mol^-1, which translates to a K_H^eff of 31 M atm^-1 at 298 K and pH 3. Our approach also allows for the determination of HNCO's acid dissociation constant, which we determine to be Ka = 2.1 ± 0.2 × 10^-4 M at 298 K. Furthermore, by using ion chromatography to analyze aqueous solution composition, we revisit the hydrolysis kinetics of HNCO at different pH and temperature conditions. Three pH-dependent hydrolysis mechanisms are in play, and we determine the Arrhenius expressions for each rate to be k1 = (4.4 ± 0.2) × 10^7 exp(−6000 ± 240/T) M^-1 s^-1, k2 = (8.9 ± 0.9) × 10^6 exp(−6770 ± 450/T) s^-1 and k3 = (7.2 ± 1.5) × 10^8 exp(−10900 ± 1400/T) s^-1, where k1 is for HNCO + H^+ + H2O → NH4^+ + CO2, k2 is for HNCO + H2O → NH3 + CO2 and k3 is for NCO^- + 2 H2O → NH3 + HCO3^-. HNCO's lifetime against hydrolysis is therefore estimated to be 10 days to 28 years at pH values, liquid water contents, and temperatures relevant to tropospheric clouds, years in oceans and months in human blood. In all, a better parameterized Henry's Law coefficient and hydrolysis rates of HNCO allow for more accurate predictions of its concentration in the atmosphere and consequently help define exposure to this toxic molecule.
Introduction
Until recently, the interest in studying HNCO was from a fundamental science perspective, with research conducted on its structure, preparation and physical properties (Belson and Strachan, 1982) and on its theoretical rovibrational spectra (Mladenović and Lewerenz, 2008). Both theoretical and experimental data indicate that HNCO is the most stable CHNO isomer with a near-linear π-bond system (Hocking et al., 1975; Jones et al., 1950; Poppinger et al., 1977). Roberts et al. (2010) reported detection of HNCO using negative ion proton transfer chemical ionization mass spectrometry (CIMS) from laboratory biomass burning and later determined its emission factor to be 0.25-1.20 mmol per mol of CO for different types of biomass fuels (Veres et al., 2010). Shortly afterwards, the same authors reported the first ambient atmospheric measurements of HNCO in Pasadena, California, reaching 120 pptv and raising concerns of HNCO exposure due to its toxicity (Roberts et al., 2011). Indeed, HNCO has been observed to cause protein carbamylation leading to cardiovascular disease, rheumatoid arthritis and cataracts (Beswick and Harding, 1984; Lee and Manning, 1973; Mydel et al., 2010; Wang et al., 2007).
Since Roberts et al.'s initial measurements, ambient HNCO has also been measured in Boulder and Fort Collins, Colorado (Roberts et al., 2014), in Toronto, Ontario (Wentzell et al., 2013) and in Calgary, Alberta (Woodward-Massey et al., 2014). HNCO has also been detected simultaneously in the gas phase and in cloud water in La Jolla, California (Zhao et al., 2014). From these studies, typical urban concentrations range from below detection limits to approximately 100 pptv, whereas concentrations as high as 1.2 ppbv, enough to be of health concern, have been measured in air masses impacted by biomass burning in Boulder, Colorado (Roberts et al., 2011, 2014; Woodward-Massey et al., 2014).
HNCO has a variety of anthropogenic and biogenic sources to the atmosphere. HNCO has been quantified from diesel engine exhaust (Kroecher et al., 2005; Wentzell et al., 2013) and light-duty vehicles (Brady et al., 2014) as well as from biogenic sources such as biomass burning (Roberts et al., 2010, 2011, 2014; Veres et al., 2010). There also exist secondary sources of HNCO to the atmosphere, including the gas-phase oxidation of amines and amides by OH radicals producing HNCO via H-abstraction mechanisms (Barnes et al., 2010; Borduas et al., 2013, 2015). Evidence of secondary sources of HNCO has also been demonstrated in the field, with peak HNCO concentrations occurring during daytime (Roberts et al., 2011, 2014; Zhao et al., 2014).
The sinks of HNCO however remain poorly constrained. HNCO has a lifetime of decades towards OH radicals in the atmosphere, as estimated by extrapolating high-temperature rate coefficients to atmospheric temperatures (Tsang, 1992; Mertens et al., 1992; Tully et al., 1989). It is also not expected to photolyze in the actinic region since its first UV absorption band is observed below 280 nm wavelengths (Brownsword et al., 1996; Dixon and Kirby, 1968; Rabalais et al., 1969). Nonetheless, HNCO has served as a benchmark system in understanding photodissociation decomposition pathways such as direct and indirect dissociation processes and remains an area of active research (Yu et al., 2013 and references therein). HNCO is most likely removed from the atmosphere by wet and/or dry deposition. HNCO's gas-to-liquid partitioning is therefore an important thermodynamic property that can be used to predict its atmospheric fate. Specifically, the Henry's Law coefficient K_H for the solubility of HNCO represents the equilibrium ratio between its aqueous-phase and gas-phase concentrations at infinite dilution according to Eq. (1) (Sander, 2015, 1999):

K_H = [HNCO]_aq / p_HNCO (1)

The Henry's Law coefficient for HNCO has only recently been measured by Roberts and coworkers, but their experimental set up was limited to a single pH measurement (Roberts et al., 2011). As HNCO is a weak acid with a pKa of 3.7, its effective Henry's Law coefficient is expected to have a large pH dependence as described in Eq. (2):

K_H^eff = K_H (1 + Ka/[H+]) (2)

Furthermore, the enthalpy of dissolution for HNCO is currently unknown. In lieu of measurements, modelling studies on HNCO have used formic acid's enthalpy of dissolution to model the temperature dependence of HNCO's Henry's Law coefficient (Barth et al., 2013; Young et al., 2012). In our present study, we measure the effective Henry's Law coefficient of HNCO over a range of pH values and temperatures to determine its enthalpy of dissolution for the first time.
HNCO reacts irreversibly with water in the aqueous phase, an unusual property for an atmospheric molecule. Once HNCO partitions to the aqueous phase, three mechanisms for its hydrolysis are possible. The first (Reaction R1) is acid-catalyzed and is therefore termolecular, whereas the second (Reaction R2) and third (Reaction R3) are bimolecular reactions involving either the protonated or deprotonated form of HNCO (Scheme 1) (Amell, 1956; Belson and Strachan, 1982; Jensen, 1958). In 1958, Jensen determined the hydrolysis rate of the three mechanisms through addition of AgNO3 to buffered solutions at different time points to precipitate unreacted isocyanate as AgNCO, followed by back titration of excess AgNO3 with NH4SCN. Considering the importance of these mechanisms in evaluating the fate of HNCO in the atmosphere, we follow up on the study by Jensen with our own experiments using ion chromatography to determine the pH and temperature dependencies of the overall rate of hydrolysis of HNCO. Quantitative knowledge of the ability of HNCO to partition to the aqueous phase and its subsequent reactions with water allows for an accurate understanding of the chemical fate of HNCO in the atmosphere (Fig. 1). In this study, we therefore provide laboratory measurements of HNCO's Henry's Law coefficient and enthalpy of dissolution as well as its three rates of hydrolysis and their respective activation energies.

Scheme 1. The three mechanisms involved in HNCO's hydrolysis:
(R1) HNCO + H^+ + H2O → NH4^+ + CO2
(R2) HNCO + H2O → NH3 + CO2
(R3) NCO^- + 2 H2O → NH3 + HCO3^-

Figure 1. The fate of HNCO in the atmosphere includes its partitioning between the gas and aqueous phases and its hydrolysis through three different mechanisms governed by k1, k2 and k3.

Experimental methods
Henry's Law coefficient experiments
To measure the effective Henry's Law coefficient K_H^eff of HNCO, we use a bubbler column experimental set up and detect HNCO through chemical ionization mass spectrometry.
Acetate reagent ion CIMS
The quadrupole chemical ionization mass spectrometer (CIMS) was built in house and is described in detail elsewhere (Escorcia et al., 2010). We opted to use acetate as the reagent ion, which has been shown to be sensitive for the detection of acids (Roberts et al., 2010; Veres et al., 2008). For this experimental set up, the reagent ion was generated by flowing 20 sccm of nitrogen over a glass tube containing acetic anhydride (from Sigma-Aldrich and used as is) maintained at 30 °C. This flow was subsequently mixed with a nitrogen dilution flow of 2 L min^-1 and passed through a polonium-210 radioactive source to generate acetate ions. All flows were controlled using mass flow controllers. The data acquisition was done under selected ion mode, where 10 m/z ratios were monitored. With the exception of m/z 42, none of the ions were observed to change during the experiments. The inlet flow of the CIMS is governed by a pin hole at 0.5 L min^-1, and a N2 dilution flow of 0.4 L min^-1 into the inlet was used to avoid depletion of the acetate reagent ion by high HNCO concentrations. Previous work suggests there is no significant role of water vapour in HNCO's detection by acetate CIMS (Roberts et al., 2010). With the CIMS's inlet dilution, the RH within the ion molecule region was < 20 %.
Experimental set-up for measurement of K_H
To obtain the Henry's Law coefficient, K_H, we monitored the decrease in gas-phase HNCO exiting a buffered aqueous solution for a range of volume flow rates. A bubbler column experimental set up is used with online gas-phase detection. This method is employed to measure HNCO's partitioning and take into account the concurrent hydrolysis of HNCO in the buffer solution at high time resolution. Our experimental setup is based on previous work (Kames and Schurath, 1995; Roberts, 2005; Roberts et al., 2011) and our apparatus is comprised of one fritted bubbler with an approximate volume of 70 mL, which contained 15 mL of a citric acid/Na2HPO4 buffer at varying pH. The 15 mL volume was chosen to reduce HNCO equilibration times and to simultaneously ensure that the bubbler's frit was submerged. Experiments performed in 30 mL of buffer yielded identical results. The water lost to the gas phase during the experiments (< 1 h) was at most 5 % of the original buffer volume and so no corrections to the latter were required. The bubbler was held in a temperature-controlled bath of an approximately 1:1 mixture of deionized water and ethylene glycol. Upstream of the bubbler, where the RH was measured to be ∼ 50 %, was a valve and a tee connection where the dry HNCO flow could be connected and disconnected during the experiments. Downstream of the bubbler was another tee, which connected to both the exhaust and the acetate reagent ion CIMS. Conveniently, the absolute concentration of gas-phase HNCO is not required in this approach since it relies on the decay of the signal, [HNCO]_t/[HNCO]_0, and not on the absolute gas-phase and aqueous-phase concentrations.
HNCO was produced using a permeation source which sublimes solid cyanuric acid at 250 °C in a flow of dry nitrogen and is described in detail elsewhere (Borduas et al., 2015). This source is based on HNCO sublimation techniques and on a similar source previously developed by Roberts et al. (Belson and Strachan, 1982; Roberts et al., 2010). The buffer solutions were made with solid citric acid, disodium phosphate and deionized water, with citric acid concentrations ranging from 0.02 to 0.0035 M to access a pH range of 2.5-4.0.
Each experiment began with gaseous HNCO flowing through a fresh buffer solution until a reasonably stable signal (> 0.01 ncps) was obtained by the CIMS (background counts ∼ 5 × 10^-4 ncps). The solution did not need to reach equilibrium for the experiment to proceed, and so lower temperatures and higher pH values (when the equilibration time is longest and may reach over 4-5 h) were feasible. Once a normalized signal (i.e. relative to the reagent ion signal) of at least 0.025 for HNCO was obtained, the flow of HNCO through the bubbler was turned off, and only pure nitrogen continued to flow through. The HNCO signal then decayed exponentially as a function of time due to partitioning as well as hydrolysis. This decay was monitored until it had decreased to less than one quarter of the original signal. This method also has the advantage of extracting an effective Henry's Law coefficient without needing to monitor the aqueous-phase HNCO concentration.
Hydrolysis rate experiments
HNCO in the aqueous phase was measured using ion chromatography at different pH and temperatures to determine its rates of hydrolysis.
Ion chromatography
The measurements for the hydrolysis of HNCO were made using a Dionex IC-2000 Ion Chromatography (IC) System. An IonPac (AS19) anion column consisting of a quaternary ammonium ion stationary phase with diameter and length dimensions of 4 and 25 mm, respectively, was employed. Sample runs used a concentration gradient of the eluent KOH ranging from 2 to 20 mM. An optimized elution program was written for each pH range measured (between 25 and 60 min per injection). Samples were injected using a Dionex (AS40) automated sampler into a 25 µL loop for preinjection. The use of a loop rather than a concentrator was important and ensured that the total HNCO/NCO^- concentrations were being measured. The IC was calibrated using matrix-matched standards of known HNCO/NCO^- concentrations prepared from serial dilutions of KOCN (Sigma-Aldrich, 96 % purity).
Hydrolysis kinetics experiments
The kinetics of the hydrolysis reactions in the pH range of 1-2 are very fast; complete decays occurred in a matter of minutes. The decay of HNCO at these low pH values is therefore too quick for the 25 min IC method to capture. To circumvent this issue we used a quenching method. Specifically, we prepared an aqueous solution of 50 mL of sulphuric acid at the desired pH; 5 mL of this acidic solution was subsequently added to a 0.02 M solution of KOCN in eight different falcon tubes to initiate the rapid hydrolysis reaction. Each reaction was then quenched at different times by a 0.1 M aqueous solution of KOH. Increasing the pH to more than 10 slowed the hydrolysis kinetics by orders of magnitude and allowed for subsequent IC measurements. Replacing sulphuric acid by nitric acid and/or KOH by NaOH yielded identical hydrolysis rates and ensured the results were reproducible with different acids and bases.
Buffer solutions in the pH range of 3-5 were prepared by using appropriate molar ratios of citric acid and disodium phosphate whereas buffer solutions in the pH range of 9-10 used sodium carbonate and sodium bicarbonate.All buffer concentrations were < 0.002 M, and we assume that the ionic strength of these solutions had minimal impact on the solubility of HNCO.For the room temperature set of kinetic experiments, the experiment was initiated by the addition of 0.1 g of KOCN to 50 mL of the desired buffer solution.The solution was further diluted by a factor of 500 and then split into eight samples for analysis at succeeding intervals on the anion IC.
Hydrolysis reactions were run at different temperatures to assess the activation energies of each of the three hydrolysis mechanisms. Room temperature reactions were conducted inside the IC autosampler AS40 (with a cover) and monitored by a temperature button (iButtons, Maxim Integrated, San Jose, CA, with 0.5 °C resolution). Colder temperature reactions were done in a water ice bath and monitored by a thermometer. Finally, warmer temperature reactions for high pH samples were run in a temperature-controlled water bath. These reactions took days to weeks to reach completion, and so 5 mL samples from the reaction mixtures were taken out of the water bath and measured on the IC at appropriate time intervals.
Henry's Law coefficient K_H
HNCO's effective Henry's Law solubility coefficient K_H^eff, expressed in M atm^-1, was determined based on the exponential decay of gaseous HNCO exiting a bubbler containing a buffered solution. The observed decay of HNCO is caused by its partitioning from the aqueous phase to the gas phase as well as its competing hydrolysis reaction. Equation (3) represents the rate law for the disappearance of HNCO during the experiment and Eq. (4) is the integrated rate law:

−d[HNCO]/dt = (φ/(K_H^eff V R T) + k_hyd) [HNCO] (3)

[HNCO]_t/[HNCO]_0 = exp[−(φ/(K_H^eff V R T) + k_hyd) t] (4)

where [HNCO]_t is the HNCO concentration at time t, [HNCO]_0 is the initial HNCO concentration (at time t = 0), [HNCO]_t/[HNCO]_0 is the HNCO concentration in the gas phase downstream of the bubbler measured by the CIMS, φ is the volumetric flow rate (cm^3 s^-1), K_H^eff is the effective Henry's Law coefficient for solubility (mol L^-1 atm^-1), V is the liquid volume of the buffer (cm^3), R is the ideal gas constant (8.21 × 10^-2 L atm mol^-1 K^-1), T is the temperature (K), k_hyd is HNCO's overall rate of hydrolysis (s^-1) and t is the time (s). To extract the value of K_H^eff from the experimental decay curves, we first plot the natural logarithm of the normalized signal as a function of time; the resulting decay constants, plotted against φ/V, yield K_H^eff from the slope and k_hyd from the intercept (Fig. 2b). These dynamic experiments were repeated with a range of buffer solutions ranging from pH 2.5 to 4.0 to determine the pH-independent Henry's Law coefficient, K_H, of HNCO. Experiments at temperatures of 273-298 K were also conducted to determine HNCO's enthalpy of dissolution, ΔH_diss.
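The slope/intercept extraction from the decay data can be sketched numerically. The flow rates and decay constants below are synthetic illustrative values (generated from an assumed K_H^eff and k_hyd near the paper's pH-3 results, not measured data); only the fitting procedure mirrors Eq. (4).

```python
import numpy as np

# Bubbler decay analysis (Eq. 4): the observed first-order decay constant is
# phi/(KH_eff*V*R*T) + k_hyd, so a plot of decay constant vs. phi/V is a line
# with slope 1/(KH_eff*R*T) and intercept k_hyd.
R = 8.21e-2   # ideal gas constant, L atm mol^-1 K^-1
T = 298.0     # K
V = 15.0      # buffer volume, cm^3

# Assumed "true" values used only to synthesize illustrative data.
KH_eff_true, k_hyd_true = 31.0, 1.0e-4

phi = np.array([5.0, 10.0, 15.0, 20.0, 25.0])         # flow rates, cm^3 s^-1
k_obs = phi / (KH_eff_true * V * R * T) + k_hyd_true  # decay constants, s^-1

slope, intercept = np.polyfit(phi / V, k_obs, 1)
KH_eff = 1.0 / (slope * R * T)   # M atm^-1, recovered from the slope
k_hyd = intercept                # s^-1, recovered from the intercept
print(f"KH_eff = {KH_eff:.1f} M atm^-1, k_hyd = {k_hyd:.1e} s^-1")
```

The same fit, applied to the measured decays, conveniently yields k_hyd as a by-product (see the comparison in the hydrolysis section).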
pH dependence of K_H^eff
The pH dependence of the effective Henry's Law coefficient K_H^eff of a weak acid like HNCO depends on its pKa as well as on the pH according to Eq. (2). Throughout our experiments, we measure the value of K_H^eff and employ Eq. (2) to plot K_H^eff as a function of the inverse of the proton concentration, [H+], and thus to extract HNCO's Henry's Law coefficient for solubility, K_H. Figure 3a depicts this linear relationship and yields a value of 26 ± 2 M atm^-1 for K_H. Our K_H value compares well with the only other published value of 21 M atm^-1 determined solely at pH 3 (Roberts et al., 2011). Figure 3b on the other hand shows experimentally determined K_H^eff at different pH values and at a constant temperature of 298.0 ± 0.2 K. Error bars in both Fig. 3a and b represent the percentage of the standard deviation of the slope as in Fig. 2b. The slope in Fig. 3a also allows us to determine HNCO's acid dissociation constant, Ka, which at 298 K is 2.1 ± 0.2 × 10^-4 M. Our Ka value also agrees well with previously reported Ka for HNCO (Amell, 1956; Belson and Strachan, 1982).
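The linearization behind Fig. 3a can be illustrated with synthetic data. The K_H^eff "measurements" below are noise-free values generated from the paper's reported K_H and Ka; the point is the fitting procedure implied by Eq. (2).

```python
import numpy as np

# pH dependence of the effective Henry's Law coefficient (Eq. 2):
# KH_eff = KH * (1 + Ka/[H+]); plotting KH_eff vs. 1/[H+] gives a line
# with intercept KH and slope KH*Ka.
KH_true, Ka_true = 26.0, 2.1e-4   # the paper's values, used to synthesize data

pH = np.array([2.5, 3.0, 3.5, 4.0])
H = 10.0 ** (-pH)                       # proton concentration, M
KH_eff = KH_true * (1.0 + Ka_true / H)  # illustrative noise-free "data"

slope, intercept = np.polyfit(1.0 / H, KH_eff, 1)
KH = intercept           # M atm^-1
Ka = slope / intercept   # M, from the ratio of slope to intercept
print(f"KH = {KH:.1f} M atm^-1, Ka = {Ka:.1e} M")
```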
Temperature dependence
The temperature dependence of HNCO's solubility was established by running experiments at varying temperatures from 273 to 298 K. Since K_H^eff is very sensitive to pH changes, all experiments were conducted with a buffer solution from the same batch and same volumetric flask within a few days. Plotting the natural logarithm of the effective Henry's Law coefficient as a function of the inverse of temperature yields the ratio of the enthalpy of dissolution, ΔH_diss, to the gas constant, R (Fig. 4). We report a value of −34 ± 2 kJ mol^-1 for HNCO's enthalpy of dissolution, where the uncertainty stems from the deviation from the slope depicted in Fig. 4. This value compares to those of similar weak acids like HONO (−40 kJ mol^-1) and HCN (−42 kJ mol^-1), but differs from the value of formic acid (−47 kJ mol^-1), which was the value assumed for HNCO in the Young et al. and Barth et al. modelling studies (Barth et al., 2013; Sander, 2015; Young et al., 2012).
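The van't Hoff analysis of Fig. 4 amounts to a linear fit of ln(K_H^eff) against 1/T. The data below are synthesized from the reported enthalpy and the pH-3 anchor value, purely to demonstrate the extraction.

```python
import numpy as np

# Van't Hoff analysis: ln(KH_eff) vs. 1/T is linear with slope -dH_diss/R.
Rgas = 8.314        # J mol^-1 K^-1
dH_true = -34.0e3   # J mol^-1, the paper's value (used to synthesize data)
KH_ref = 31.0       # M atm^-1 at 298 K and pH 3, used as an anchor point

T = np.array([273.0, 283.0, 293.0, 298.0])
lnK = np.log(KH_ref) - (dH_true / Rgas) * (1.0 / T - 1.0 / 298.0)

slope, _ = np.polyfit(1.0 / T, lnK, 1)
dH_diss = -slope * Rgas   # J mol^-1
print(f"dH_diss = {dH_diss / 1e3:.0f} kJ mol^-1")
```

A negative ΔH_diss means solubility increases as temperature drops, so cold clouds take up more HNCO than the 298 K coefficient alone would suggest.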
Rate of hydrolysis k_hyd
There are three mechanisms by which HNCO can react with water, described in Scheme 1 (Reactions R1 to R3) and depicted in Fig. 1. The disappearance of HNCO in the aqueous phase can therefore be described by the rate law shown as Eq. (5):

−d[HNCO]_tot/dt = k1 [H+][HNCO] + k2 [HNCO] + k3 [NCO^-] (5)

The pH dependence of HNCO's hydrolysis manifests itself in the first term of Eq. (5) as the hydrogen ion concentration, as well as in the concentration of the dissociated and/or non-dissociated acid in each term.
To mathematically integrate this rate law, the concentration of HNCO needs to be expressed as the sum of undissociated HNCO and of isocyanate ion NCO^- in solution, which is denoted in Eq. (6) as [HNCO]_tot. HNCO's acid dissociation constant Ka relates the concentrations of HNCO and NCO^- as shown in Eq. (6). The Ka-dependent expression of Eq. (6) is then substituted into the rate law of Eq. (5), and subsequently integrated. The Ka value of HNCO has a slight temperature dependence, with a heat of dissociation previously measured to be 5.4 kJ mol^-1, which for the temperature range of 273 to 298 K represents a 25 % change (Amell, 1956). We therefore use Amell's heat of dissociation value throughout our analysis to account for Ka's temperature dependence in the van't Hoff equation. Furthermore, Belson et al.'s evaluation of the Ka of HNCO literature recommends 2.0 × 10^-4 M at 298 K (Belson and Strachan, 1982). Finally, our own work on the pH dependence of the Henry's Law coefficient of HNCO suggests a Ka value of 2.1 ± 0.2 × 10^-4 M at 298 K, consistent with the recommended value (Fig. 3a).
[HNCO] = [HNCO]_tot − [NCO^-] = [HNCO]_tot [H+]/([H+] + Ka) (6)

By integrating Eq. (5) with the appropriate substitutions, the resulting expression is Eq. (7), where k_hyd represents the observed first-order rate loss of hydrolysis of HNCO and depends on the individual reaction rates k1, k2 and k3 according to Eq. (8):

k_hyd = (k1 [H+] + k2) [H+]/([H+] + Ka) + k3 Ka/([H+] + Ka) (8)
[HNCO]_t/[HNCO]_0 = e^(−k_hyd t) (7)

The aim of our hydrolysis experiments is to measure k_hyd at different pH values to subsequently solve for the values of the individual hydrolysis rate coefficients k1, k2 and k3. To measure k_hyd, we employ ion chromatography (IC), which allows for quantitative measurement of the total isocyanic acid in solution as NCO^- using an anion chromatography column. The key to making [HNCO]_tot measurements was to use a loop injection port for the IC instead of a concentrator column, since the latter retains only ions and would not measure any protonated HNCO in solution. Appropriate buffer solutions were made to conduct experiments over a range of pH values from 1.7 to 10.4. The decay of [HNCO]_tot was monitored by IC over time, and plotting the natural logarithm of the decay as a function of time as in Fig. 5 yields the k_hyd specific to that temperature and pH. Hydrolysis experiments are listed in Table A1 in Appendix A.
Determining k1 and k2
At a pH below 3, the third hydrolysis mechanism (Scheme 1, Reaction R3) will contribute minimally to the overall k_hyd. Indeed, the third term in Eq. (8), k3 Ka/(Ka + [H+]), will become very small because [H+] ≫ Ka. Furthermore, very little of the HNCO is present as NCO^- at low pH. This assumption (which we verify retroactively) simplifies the k_hyd expression to Eq. (9) with only two unknowns, k1 and k2:

k_hyd = k1 [H+] + k2 (9)

We can now solve for k1 and k2 from two k_hyd values derived from experiments conducted at two different pH values but at the same temperature. For example, solving for k1 and k2 at 295 K using the k_hyd values in Table A1, we obtain a value of (6.73 ± 0.27) × 10^-2 M^-1 s^-1 for k1 and of (1.04 ± 0.04) × 10^-3 s^-1 for k2. We do this calculation once per temperature. The uncertainties associated with these measurements come from the slope of the decay of aqueous-phase HNCO measured by IC.
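Two low-pH measurements at a common temperature form a 2×2 linear system in k1 and k2 via Eq. (9). The k_hyd values below are synthesized from the paper's 295 K results rather than taken from Table A1.

```python
import numpy as np

# At low pH, Eq. (9) gives k_hyd = k1*[H+] + k2, so two measurements at the
# same temperature determine k1 and k2.
k1_true, k2_true = 6.73e-2, 1.04e-3   # 295 K values, used to synthesize data

H = np.array([10.0 ** -1.7, 10.0 ** -2.0])   # two acidic conditions, M
k_hyd_obs = k1_true * H + k2_true            # "observed" overall rates, s^-1

# Linear system: [[H1, 1], [H2, 1]] @ [k1, k2] = k_hyd_obs
A = np.column_stack([H, np.ones_like(H)])
k1, k2 = np.linalg.solve(A, k_hyd_obs)
print(f"k1 = {k1:.2e} M^-1 s^-1, k2 = {k2:.2e} s^-1")
```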
Temperature dependence of k1 and k2
Hydrolysis experiments of HNCO at three different temperatures further enable us to solve for the temperature dependence of k1 and k2. We chose three temperatures relevant to tropospheric air masses: 270, 283 and 295 K. Figure 6 represents the slope of the natural logarithm of the rate coefficient of hydrolysis as a function of the inverse of the temperature, which according to the Arrhenius equation shown in Eq. (10) yields the activation energy specific to each hydrolysis mechanism:

k = A exp(−E_a/RT) (10)

We obtain activation energies of 50 ± 2 kJ mol^-1 and 56 ± 4 kJ mol^-1 for k1 and k2, respectively. Furthermore, the y-intercept of these linear plots yields the value of ln(A) in Eq. (10), and so the A factors of each hydrolysis mechanism can also be obtained, providing Arrhenius expressions of k1 = (4.4 ± 0.2) × 10^7 exp(−6000 ± 240/T) M^-1 s^-1 and k2 = (8.9 ± 0.9) × 10^6 exp(−6770 ± 450/T) s^-1. The uncertainties stem from the fit to the data points in Fig. 6 (and their error bars come from the slope of the decay of aqueous-phase HNCO measured by IC).
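The Arrhenius extraction can be sketched as a linear fit of ln(k) against 1/T. Here the k1 values are generated from the reported expression itself (central values, no uncertainties) at the three experimental temperatures, so the fit simply recovers E_a1 and A1.

```python
import numpy as np

# Arrhenius analysis (Eq. 10): ln(k) vs. 1/T is linear with slope -Ea/R and
# intercept ln(A).
Rgas = 8.314   # J mol^-1 K^-1

T = np.array([270.0, 283.0, 295.0])
k1 = 4.4e7 * np.exp(-6000.0 / T)   # synthesized from the reported expression

slope, intercept = np.polyfit(1.0 / T, np.log(k1), 1)
Ea = -slope * Rgas       # J mol^-1
A = np.exp(intercept)    # pre-exponential factor, M^-1 s^-1
print(f"Ea = {Ea / 1e3:.0f} kJ mol^-1, A = {A:.1e} M^-1 s^-1")
```

The recovered E_a1 of ~50 kJ mol^-1 (6000 K × R) matches the activation energy quoted in the text.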
Determining k3 and its temperature dependence
At high pH levels, the third hydrolysis mechanism (Scheme 1, Reaction R3) will dominate the observed k_hyd; however, the first two mechanisms may still have a non-negligible contribution to k_hyd and can therefore not be disregarded. We can solve for k3, knowing k1 and k2 and their respective temperature dependencies, using Eq. (8). The k_hyd values measured at pH above 9 and at 40 °C are used (Table A1), and k3 is determined for each pH. The average of our three measurements at 40 °C is (5.77 ± 0.35) × 10^-7 s^-1. The temperature dependence of k3 is determined in an analogous way to k1 and k2 and is also depicted in Fig. 6. We obtain a value of 91 ± 12 kJ mol^-1, which translates to an Arrhenius expression of k3 = (7.2 ± 1.5) × 10^8 exp(−10900 ± 1400/T) s^-1.
Equipped with the values of k1, k2 and k3 and their temperature dependencies, a map of the expected total hydrolysis rate, k_hyd, as a function of temperature and pH can be generated using Eqs. (8) and (10) and is plotted as Fig. 7. For reference, the colour scale of Fig. 7 also reads in hydrolysis lifetime of HNCO in hours. It is clear that HNCO's lifetime in the aqueous phase has a large temperature and pH dependence.
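A sketch of how Eq. (8), the Arrhenius expressions and Ka's van't Hoff correction combine into a k_hyd(T, pH) map of the kind shown in Fig. 7 (central values only; uncertainties omitted):

```python
import numpy as np

Rgas = 8.314  # J mol^-1 K^-1

def Ka(T):
    """Acid dissociation constant of HNCO (M), van't Hoff-adjusted around
    Ka(298 K) = 2.1e-4 M with a heat of dissociation of 5.4 kJ mol^-1."""
    return 2.1e-4 * np.exp(-(5.4e3 / Rgas) * (1.0 / T - 1.0 / 298.0))

def k_hyd(T, pH):
    """Overall hydrolysis rate of HNCO (s^-1) per Eq. (8)."""
    H = 10.0 ** (-pH)
    k1 = 4.4e7 * np.exp(-6000.0 / T)    # M^-1 s^-1, acid-catalyzed channel
    k2 = 8.9e6 * np.exp(-6770.0 / T)    # s^-1, neutral HNCO + H2O
    k3 = 7.2e8 * np.exp(-10900.0 / T)   # s^-1, NCO- + 2 H2O
    f = H / (H + Ka(T))                 # fraction of undissociated HNCO
    return (k1 * H + k2) * f + k3 * (1.0 - f)

print(f"k_hyd(295 K, pH 3) = {k_hyd(295.0, 3.0):.2e} s^-1")
```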
Comparing the rate of hydrolysis k hyd through different methods
The individual rate coefficients of the three hydrolysis mechanisms (Scheme 1, Reactions R1 to R3) have only been evaluated one other time in the literature (Jensen, 1958). Our IC experimental method differs substantially from Jensen's back titration method, and yet we obtain similar values for k1, k2 and k3 as well as for their respective activation energies. The values are summarized in Table 1. Again, the colour scale of Fig. 7 is generated from Eq. (8) using our obtained values for k1, k2 and k3 and for E_a1, E_a2 and E_a3, and we superimpose all our k_hyd measurements from Table A1 as circles. We further add Jensen's published raw data for comparison (Jensen, 1958) as triangles. The agreement is good and is consistently within the same order of magnitude (Fig. 7).
In addition, our Henry's Law coefficient experiment provides a complementary way to determine k_hyd at different temperatures and pH values. Indeed, the intercept of the line fit to the data of d ln(C_t/C_0)/dt vs. φ/V yields k_hyd, representing the value for the loss process in the solution of the bubbler column experiment (an example is given in Fig. 2b). We show these values as squares in Fig. 7. Roberts et al. also determined k_hyd through this method at pH 3 and at 25 °C, and this value is appended to Fig. 7 as a diamond (Roberts et al., 2011). The agreement is good in all four cases. We can conclude that the lifetime of HNCO against hydrolysis in dilute aqueous solutions spans seconds to years depending on pH and temperature. The lifetime of HNCO against hydrolysis in cloud water of pH 3-6 will be shorter and range from 10 h to ∼ 20 days in the troposphere. On the other hand, HNCO's hydrolysis in ocean waters of pH ∼ 8.1 and temperatures below 30 °C will be very slow, translating to a lifetime of 1-2 years if we assume no other reactive chemistry is taking place. Finally, in the context of exposure, if HNCO is present in human blood at physiological pH and temperature, its lifetime to hydrolysis will be as high as several months. On the other hand, if HNCO is present in the stomach, which is more acidic, we would expect its lifetime to drop to minutes or hours.
Atmospheric implications
HNCO is a toxic molecule and can cause cardiovascular and cataract problems through protein carbamylation (Beswick and Harding, 1984; Mydel et al., 2010; Wang et al., 2007). Recently reported ambient measurements of HNCO in North America raise concerns of exposure, particularly from biomass burning, diesel and gasoline exhaust and urban environments (Brady et al., 2014; Roberts et al., 2011, 2014; Wentzell et al., 2013; Woodward-Massey et al., 2014; Zhao et al., 2014). With the values for HNCO's Henry's Law coefficient and hydrolysis rates reported here, a better understanding of HNCO's removal rate from the atmosphere can be determined, and hence HNCO's atmospheric lifetime can be estimated. Note however that our HNCO lifetime estimates do not consider dry deposition and therefore represent an upper limit, particularly since Young et al. found that dry deposition can be significant for HNCO (Young et al., 2012). Specifically, the lifetime of HNCO in the atmosphere will depend on its partitioning to the aqueous phase K_H^eff, the temperature T, the pH and liquid water content (LWC) of the aerosol and/or droplet, and finally the hydrolysis of HNCO, k_hyd, once in solution. We can calculate HNCO's lifetime against hydrolysis based on Eq. (11), where τ is the lifetime in seconds, L is the fraction of air volume occupied by liquid water (dimensionless) and R is the gas constant:

τ = 1/(K_H^eff R T L k_hyd) (11)

Figure 8 plots Eq. (11) with different fixed variables. Figure 8a holds the LWC at 1 g m^-3, a value representative of cloud water, highlighting the dependence of HNCO's lifetime on temperature and pH (Ip et al., 2009). At atmospherically relevant pH of 2 to 6 and at temperatures below 30 °C, HNCO has a lifetime on the order of 10 days to hundreds of years. Alternatively, Fig. 8b holds the pH at 4 and varies the LWC on the x axis. Water concentrations relevant to wet aerosol (1-100 µg m^-3) are too small to act as a significant sink for gas-phase HNCO. However, Fig. 8b highlights the strong dependence of HNCO lifetime on LWC in clouds, again ranging from days to hundreds of years. It therefore appears that if HNCO is incorporated into cloud water, it is more likely to be rained out or revolatilized than to hydrolyze, given typical times in clouds of minutes to hours. There is also the possibility that HNCO has other currently unknown sinks in cloud water that may be competitive with its hydrolysis, and further work on HNCO's aqueous-phase chemistry with nucleophiles such as amines and alcohols is currently underway in our laboratories. Finally, HNCO will partition readily into oceans at pH ∼ 8, but will take years to hydrolyze.
(11) Zhao et al. (2014) observed higher concentrations of HNCO in the cloud water in La Jolla, California than predicted by its Henry's Law coefficient at 298 K (Zhao et al., 2014).This observation remains puzzling but may point towards sources of HNCO within cloud water other than simple partitioning chemistry.The Barth et al. (2013) modelling study concluded that fog, low-level stratus clouds or stratocumulus clouds were the most efficient cloud conditions at removing HNCO from the gas phase, particularly in polluted scenarios where the cloud water was more acidic.The authors highlighted the high dependence of HNCO's fate on liquid water pH and temperature, consistent with our findings (Barth et al., 2013).The Young et al., 2012 study, which modelled global HNCO budgets, assumed the aqueous loss of the weak acid occurred only when the cloud liquid water content was greater than 1 mg m −3 .Based on Fig. 8b, 1 mg m −3 is low for HNCO to significantly partition into the aqueous phase and rather requires water mass concentrations 1000 times greater for HNCO's lifetime to drop to days.The model may have overestimated the ability for LWC to act as a sink for HNCO.HNCO may be a longer lived species than previously thought and exposure of this toxic molecule may pose a threat to regions with HNCO point sources like biomass burning and engine exhaust, as pointed out by Young et al. (2012) and Barth et al. (2013).
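As a numerical illustration of the lifetime estimate above, Eq. (11) can be evaluated together with the pH-dependent effective Henry's Law coefficient, K_H^eff = K_H (1 + Ka/[H+]) (Eq. 2). The intrinsic K_H and k_hyd values in this sketch are placeholders rather than this study's fitted parameters; Ka = 2.1 × 10−4 M follows the Fig. 3 caption.

```python
# Sketch of Eq. (11) combined with the pH dependence of the effective
# Henry's Law coefficient, K_H^eff = K_H * (1 + Ka/[H+]) (Eq. 2). The
# intrinsic K_H and k_hyd values below are placeholders, not this work's
# fitted parameters; Ka = 2.1e-4 M follows the Fig. 3 caption.
R = 0.08206   # gas constant, L atm mol^-1 K^-1
KA = 2.1e-4   # acid dissociation constant of HNCO, M

def k_h_eff(k_h, pH):
    """Effective Henry's Law coefficient of a weak acid, M atm^-1 (Eq. 2)."""
    return k_h * (1.0 + KA / 10.0 ** (-pH))

def lifetime_s(k_h, pH, T, lwc_g_m3, k_hyd):
    """tau = 1 / (K_H^eff * R * T * L * k_hyd) (Eq. 11), in seconds."""
    L = lwc_g_m3 * 1e-6  # liquid water volume fraction; 1 g m^-3 ~ 1e-6
    return 1.0 / (k_h_eff(k_h, pH) * R * T * L * k_hyd)

# Cloud-like conditions (placeholder K_H and k_hyd values):
tau = lifetime_s(k_h=25.0, pH=4.0, T=288.0, lwc_g_m3=1.0, k_hyd=1e-5)
print(f"{tau / 86400:.0f} days")
```

Raising the pH increases K_H^eff and so shortens the gas-phase lifetime, while lowering the LWC lengthens it, mirroring the trends shown in Fig. 8.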
Figure 1. The fate of HNCO in the atmosphere includes its partitioning between the gas and aqueous phases and its hydrolysis through three different mechanisms governed by k1, k2, and k3.
Figure 2. (a) The concentration decay curves as a function of time according to Eq. (4) for each flow rate shown; (b) the slopes of each fit in (a) plotted as a function of the ratio of the flow rate to the volume. The symbols in both panels represent the same flow rates shown.
Figure 3. (a) The fit according to Eq. (2) of the experimental K_H^eff values, which allows for the determination of K_H and Ka at 298 K. (b) The experimental K_H^eff values as a function of pH at 298 K. The black line is the modelled dependence of K_H^eff according to Eq. (2), based on the determined value of K_H and a value for Ka of 2.1 × 10−4 M. The inset shows the range of K_H^eff across the full range of pH.
Figure 4. The temperature dependence of the experimentally measured K_H^eff at pH 3.08.
Figure 5. Example of a hydrolysis experiment at pH 5.4 and 25 °C where [HNCO]_tot is measured by loop injections on the IC.
Figure 6. The linear plots of the natural logarithm of each hydrolysis rate coefficient k1, k2 and k3 as a function of the inverse of temperature, used to obtain the activation energy of each mechanism.
Figure 7. k_hyd as a function of temperature and pH, generated from Eq. (8) using our obtained values for k1, k2 and k3 and for Ea1, Ea2 and Ea3. All available k_hyd measurements for HNCO from the literature and from this work are superimposed and colour coded appropriately. As a guide, the colour scale also represents the lifetime in hours of HNCO in dilute aqueous solutions.
Figure 8. (a) The lifetime of HNCO in days as a function of temperature and pH at an LWC of 1 g m−3, and (b) the lifetime of HNCO in days as a function of temperature and LWC at pH 4.
|
v3-fos-license
|
2020-10-30T08:05:19.996Z
|
2020-01-01T00:00:00.000
|
225607534
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1479973120948077",
"pdf_hash": "2f497ac7d26a6a0ef08fbbf0b83226aaa0f9e037",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:704",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "dbcd96f5e8d50d6cf00591f0115bbb965a35b84a",
"year": 2020
}
|
pes2o/s2orc
|
Randomised controlled trial of the effect, cost and acceptability of a bronchiectasis self-management intervention
Background: Patient self-management plans (PSMP) are advised for bronchiectasis but their efficacy is not established. We aimed to determine whether, in people with bronchiectasis, the use of our bronchiectasis PSMP, the Bronchiectasis Empowerment Tool (BET), compared to standard care, would improve self-efficacy. Methods: In a multi-centre mixed-methods randomised controlled parallel study, 220 patients with bronchiectasis were randomised to receive standard care with or without the addition of our BET plus education sessions explaining its use. BET comprised an action plan, indicating when to seek medical help based on pictorially represented indications for antibiotic therapy, and four educational support sections. At baseline and after 12 months, patients completed the Self-Efficacy to Manage Chronic Disease Scale (SEMCD), St George's Respiratory Questionnaire (SGRQ), EQ-5D-3 L (to calculate Quality Adjusted Life Years (QALYs)) and cost questionnaires. Qualitative data were obtained by focus groups. Results: Recruitment to the study was high (63% of eligible patients agreed to participate); however, the completion rate was low (57%). BET had no effect on SEMCD (mean difference 0.14 (95% confidence interval (95% CI) −0.37 to 0.64), p = 0.59), SGRQ, exacerbation rates, overall cost to the NHS or QALYs. Most participants had developed their own techniques for monitoring their condition and did not find BET useful, as it was difficult to complete. Participant knowledge was good in both groups. Conclusion: The demand for patient support in bronchiectasis was high, suggesting a clinical need. However, BET did not improve self-efficacy, health related quality of life, costs or clinically relevant outcome measures. BET needs to be modified to be less onerous for users and implemented within a wider package of care.
Further studies, particularly those evaluating people newly diagnosed with bronchiectasis, are required and should allow for a 50% withdrawal rate or utilise less burdensome outcome measures. Clinical trials registration: ISRCTN 18400127. Registered 24 June 2015. Retrospectively registered.
Keywords
Bronchiectasis, mixed-methods, patient self-management plans, self-efficacy to manage chronic disease scale, St George's respiratory questionnaire

Date received: 28 February 2020; accepted: 16 July 2020

Background

Bronchiectasis, a chronic lung disease characterised by chronic purulent sputum production, breathlessness and cough, is managed with airway clearance techniques, airway pharmacotherapy and appropriate use of antibiotics, along with patient education and disease monitoring. 1 People with bronchiectasis often have impaired health related quality of life (HRQOL) 2 and can experience repeated exacerbations due to lung infection, resulting in deterioration in symptoms and increased hospital bed days and cost. 3 Living with bronchiectasis results in considerable burden for patients; therefore, methods of improving patient centred care are required to improve patient empowerment. 4 Patient Self-Management Plans (PSMP) aim to do this and have been shown to improve health outcomes for adults with asthma 5 and to be cost-effective. 6 Indeed, the recent European Multicentre Bronchiectasis Audit and Research Collaboration (EMBARC) consensus statement about research priorities highlighted the need for studies to determine the effectiveness of PSMP in bronchiectasis. 7 A recently published systematic review concluded that there was insufficient evidence to determine whether self-management interventions are beneficial for people with bronchiectasis. 8 We developed a self-management intervention for bronchiectasis (the Bronchiectasis Empowerment Tool (BET)) based on British Thoracic Society guidelines, patient consultation and the available literature on the patient perspective and needs for bronchiectasis self-management. 9 It contained a 1-page action plan (which advises on actions depending on different circumstances) consisting of 3 action points, as is recommended, 10 embedded in a document with written information, and was supported by one-to-one education.
The study aimed to test whether, in people with bronchiectasis, the use of BET, compared to standard care, would improve self-efficacy as measured by the Self-Efficacy to Manage Chronic Disease Scale (SEMCD), 11 as this is a fundamental aspect of self-management. 12 Secondary aims were to assess the effect of BET on HRQOL and disease-related knowledge and to determine whether it was cost-effective. We also aimed to explore participants' acceptability of BET.
Methods

Design
This was a multi-centre randomised controlled mixed-methods parallel study of BET in people with bronchiectasis over a 12-month period. Participants from six hospitals (one bronchiectasis specialist centre, four local hospitals with specialist respiratory nursing support and one community hospital) in East Anglia, UK were recruited from May 2013 to April 2015. The study was conducted in accordance with Good Clinical Practice and all participants gave written informed consent. It had ethical approval (13/SC/0140) and was registered on a trials database (ISRCTN 18400127).
Participants
Patients of either gender were included if they were older than 18 years, had a diagnosis of bronchiectasis confirmed on high resolution computed tomography (HRCT) and had had at least one exacerbation within the previous 12 months requiring treatment with antibiotics. Patients with cystic fibrosis or traction bronchiectasis, severe or uncontrolled co-morbid disease or impairment in cognitive functioning, and those who did not speak English, were excluded. Patients currently using a written patient self-management plan or involved in the design of BET were also excluded.
Randomisation
Eligible participants were randomised to the intervention or control groups, after completion of the baseline assessments, on a 1:1 basis using a computer-generated code created by the study statistician, with stratification according to hospital centre and severity of disease (four or more exacerbations in the last 12 months versus fewer than four) and with code concealment in sequential opaque envelopes. Treatment allocation was undertaken by an unblinded researcher. All eligible participants received the contemporaneous British Lung Foundation Bronchiectasis Patient Information Sheet and the Bronchiectasis Patient Information Leaflet from the British Thoracic Society/Association of Chartered Physiotherapists Respiratory Care Guidelines. 13

Intervention

Participants randomised to the intervention group received the BET document plus education sessions about its use. BET is a 48-page A5 booklet and comprises an action plan, four educational support sections each with notepads to assist in keeping track of their health, and links to on-line resources. The action plan is based on the indications for antibiotic therapy from the BTS bronchiectasis guidelines (sputum purulence, sputum volume and cough/wheeze/breathlessness) and pictorially represents easily recognisable health changes indicating when to seek medical help, to minimise barriers of health literacy. The educational support sections comprise information about general health, sputum clearance techniques and medication. There is a section for recording each course of antibiotics and the date of sputum microbiology.
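The stratified 1:1 allocation described under Randomisation could be generated along the lines of the sketch below; the block size, seeds and stratum labels are assumptions for illustration, as the study does not specify its code-generation details.

```python
# Illustrative sketch of stratified 1:1 randomisation with permuted blocks.
# The study stratified by hospital centre and exacerbation frequency; the
# block size, seed and stratum labels below are assumptions for illustration.
import random

def allocation_sequence(n_blocks, block_size=4, seed=None):
    """Return a 1:1 allocation list built from shuffled permuted blocks."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = (["intervention"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)  # random order within each block keeps 1:1 balance
        sequence.extend(block)
    return sequence

# One concealed sequence per stratum, e.g. (centre, ">=4 exacerbations"):
strata = [("centre_A", True), ("centre_A", False), ("centre_B", True)]
sequences = {s: allocation_sequence(10, seed=i) for i, s in enumerate(strata)}
print(sequences[("centre_A", True)][:8])
```

Permuted blocks guarantee the arms never drift far out of balance within any stratum, which matches the 1:1 stratified design stated above.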
An unblinded researcher (CB), previously a respiratory nurse, provided education about BET via four brief telephone conversations (lasting on average 10, 7, 5 and 2 minutes) delivered on consecutive days at the beginning of the study; these covered the use of the action plan and the information, monitoring and reference sections. Participants were given the opportunity to ask questions and to practise using the tool. Patients were provided with a contact number for information about the study and use of BET (but not for clinical queries). Participants' healthcare providers were given brief information about BET in a letter.
Control
Participants within the control group received standard care whereby patients attended routine appointment and were guided on their management according to current practice as per the BTS bronchiectasis guidelines.
Measurements
Patients received the six-item SEMCD to assess self-efficacy, as it is a valid, responsive tool with high internal consistency in chronic disease, scored between 1 and 10 with 10 indicating total confidence in managing disease, 11 and has been used to evaluate self-management programmes 14 ; the St George's Respiratory Questionnaire (SGRQ) 15 to assess disease-related HRQOL, as it has been validated for use in bronchiectasis 16 ; the EuroQol-5D 3-level version (EQ-5D-3 L) 17 to assess HRQOL; and cost questionnaires, at baseline and every 3 months by post in a reply-paid envelope. The Lung Information Needs Questionnaire (LINQ), 18 which assesses knowledge and behaviour, is validated in patients with chronic obstructive pulmonary disease but is easily transferable to bronchiectasis; it was completed at baseline and after 12 months. As no appropriate validated questionnaire existed that addressed the participants' knowledge and confidence about bronchiectasis, a new questionnaire was created in consultation with the research team and lay advisors and was completed by participants after 12 months. Patients who failed to return the questionnaires were sent a reminder questionnaire by post. The number of exacerbations of bronchiectasis, 19 medical contacts and sputum microbiology requests were obtained from cost questionnaires and hospital records.
Two focus groups, comprising 4 participants each, purposively sampled to include patients with mild and severe disease from the intervention group, were facilitated by CB under supervision of AS (qualitative research expert) using a semi-structured interviewing technique, to explore participants' perceptions of BET.
Analysis
The primary outcome was the change from baseline in SEMCD. A sample size of 154 patients has 80% power to detect a treatment difference (two-sided 5% significance) of 1 unit (10% of the maximum score) of the SEMCD with a standard deviation of 2.2 units. 20 We expected a withdrawal rate of 30% based on a study in chronic obstructive pulmonary disease with a similar questionnaire burden, 21 and therefore 220 patients were entered into the study. All data were double entered and discrepancies resolved by re-examining the source data. LINQ was analysed using the LINQ Scoring Tool (www.linq.org.uk). The Bronchiectasis Aetiology Comorbidity Index was calculated from clinical data. 22 The analysis was based on an intention-to-treat approach. Change from baseline for primary and secondary endpoints was compared between groups using a general linear model adjusted for baseline severity. Total exacerbations and unscheduled care were both compared using negative binomial regression and reported as the incidence rate ratio, which is the ratio of the event rates between the study arms. Adjusted analyses were conducted by additionally including the baseline value in the model as a covariate, e.g. for the SEMCD outcome we adjusted for the baseline measure of SEMCD. Data are presented as mean and standard deviation. The analysis was undertaken using Stata 16.1/SE.
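The quoted sample size can be reproduced to within rounding with the standard normal-approximation formula for a two-sample comparison of means; this is a sketch, not necessarily the authors' exact calculation.

```python
# Normal-approximation sample size for a two-sample comparison of means,
# matching the stated design: 1-unit SEMCD difference, SD 2.2, two-sided
# alpha = 0.05, 80% power. A sketch, not necessarily the authors' exact
# calculation.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd/delta)^2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

n = n_per_group(delta=1.0, sd=2.2)
print(n, "per group;", 2 * n, "in total before allowing for withdrawal")
```

This yields 76 per group (152 in total), within rounding of the quoted 154; inflating 152 for the expected 30% withdrawal gives roughly 218, consistent with the 220 entered.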
Recordings of the focus groups were transcribed and a review of the data generated initial codes. Data from the focus groups were analysed in parallel to increase rigour. 23 We used Microsoft Office Excel and computer assisted qualitative data analysis software (Nvivo11) to perform an inductive thematic analysis where patterns and clusters of linked data were organised into themes. 24,25 In the results section we show selected quotes to illustrate the participants' experience of using BET.
Economic evaluation
Costs were estimated from the perspective of the NHS. The intervention costs comprised a specialist nurse to arrange and conduct the telephone education sessions, who would require 2 hours of 1:1 training, and BET booklet printing. In the cost questionnaires, participants reported both hospital and community health visits. Unit costs were assigned to all items of resource use (£GBP ($USD) for the 2014-15 financial year). 26,27 Responses to the EQ-5D-3 L were converted into utility scores 28 using the UK York A1 tariff. 29 Quality Adjusted Life Year (QALY) scores were subsequently calculated using the area-under-the-curve approach. 30 Multiple imputation was performed to account for missing cost and outcome data. 31 Regression analysis 32 was subsequently used to estimate the mean incremental cost (mean difference in cost) and effect (QALY gain) between the two groups and the incremental cost-effectiveness ratio (ICER). 33 The cost-effectiveness acceptability curve (CEAC), which estimates the probability of the intervention being cost-effective, 34 was estimated at a value of £20,000 ($26,400) per QALY.
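A minimal sketch of the area-under-the-curve QALY calculation described above: EQ-5D utility scores at baseline and each 3-month follow-up are integrated over one year with the trapezoidal rule. The utility values are hypothetical, not trial data.

```python
# Area-under-the-curve QALY calculation: EQ-5D utility scores at baseline
# and each 3-month follow-up are integrated over one year with the
# trapezoidal rule. The utility values below are hypothetical.
def qaly_auc(utilities, times_years):
    """Trapezoidal area under the utility-time profile (QALYs)."""
    return sum((u0 + u1) / 2 * (t1 - t0)
               for (u0, u1), (t0, t1) in zip(zip(utilities, utilities[1:]),
                                             zip(times_years, times_years[1:])))

utilities = [0.70, 0.72, 0.68, 0.71, 0.69]   # hypothetical EQ-5D utilities
times = [0.0, 0.25, 0.5, 0.75, 1.0]          # years from baseline
print(round(qaly_auc(utilities, times), 3))  # 0.701 QALYs over 12 months
```

The ICER is then the mean cost difference between arms divided by the mean QALY difference, and the CEAC summarises the probability that this ratio falls below a willingness-to-pay threshold such as £20,000 per QALY.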
Results
The intention-to-treat analysis included 220 randomised patients, of whom 155 (69%) were female, representing 63.2% of eligible individuals (Figure 1). They had a mean (standard deviation) age of 66.9 (12.0) years, FEV1 1.84 (0.69) L, SEMCD 7.02 (2.0), total SGRQ 42.4 (19.1) and a median (interquartile range) time from diagnosis of 5 (2-14) years. The two groups were well balanced at baseline and hence no adjustment to the analysis was required to account for baseline factors (Table 1). The withdrawal rate was higher than expected, with only 127 individuals (57%) returning the primary outcome questionnaire at 12 months. There was no difference in the change in SEMCD between the two study arms. The data were very slightly negatively skewed, but re-analysis using the bootstrap with 1,000 iterations gave similar results, particularly for the adjusted analysis (unadjusted p = 0.96, adjusted p = 0.60), so the results are not sensitive to violation of the assumptions of the t-test. There were no significant differences between intervention and control for change in SGRQ, exacerbation rate, LINQ score or sputum microbiology requests (Table 2). In addition, there were no differences between the intervention and control groups at any of the 3-month time points for any of the variables. Both groups were confident in managing their condition at the end of the study (Table 3).
Within the focus groups, three participants out of eight had fully utilised the BET tool. Seven out of eight felt the need for support with bronchiectasis, but not necessarily in the form of BET. Most focus-group participants had already developed their own techniques for monitoring their condition. One of them said: 'A lot of the things in there I already knew, but not everybody would, particularly the newly diagnosed wouldn't'. Another said: '. . . what I would do is make it slightly simpler, I felt that sometimes I was repeating things. When you are filling it in, you are not well at the time and that makes it more difficult. I think that if someone could have reviewed my progress with me and guided me it might have been even more successful.' (1105) However, those that did use BET reported having gained a clearer and better insight into the presentation and duration of their symptoms. The aspect mentioned most was the improved interaction and communication with healthcare professionals, and secondly the self-care behaviours, e.g. sputum testing and airway clearance. Emerging themes ranged from the impact of the disease on social interactions (embarrassment, change of role and isolation) to the challenges of taking antibiotics, influenced by side-effects, media messages and the complexities of intravenous self-administration (see appendix). An overarching theme was the need for informed guidance and support, illustrated by the following extracts: 'From a personal basis not being able to pick up a phone and say to somebody do you think that it is alright? Do you think that I can do something to improve things? If you know someone who knows a lot about it that would be wonderful. A nurse to talk to.' (1044) 'It was nice as I mentioned earlier to speak to a GP who was knowledgeable and knew exactly what I was
Discussion
We did not show that the use of BET had a beneficial effect in terms of self-efficacy, HRQOL, clinically relevant disease outcome measures such as exacerbations or hospitalisations, or costs. Uptake into the study was high, reflecting patients' desire to be involved with and assist initiatives to increase education and support for their condition. However, participants did not find the self-management tool valuable: although the action plan was brief, BET overall was too onerous to complete and few participants used it. Participants did not feel more informed about their condition and there was no change in their behaviour. None of the participants were newly diagnosed and many had developed their own techniques to monitor and manage their disease. This was despite the involvement of patients with bronchiectasis in the development of BET, although they were possibly self-selected in terms of their enthusiasm for the intervention.
Unfortunately, patient withdrawal was higher than we expected and therefore our study was underpowered. This may be due to the lack of study visits and face-to-face contact with researchers, or to the burden of literacy represented by the intervention and the patient-reported outcome and cost measures. The low-intensity nature of the study visits but relatively high questionnaire burden may have resulted in disengagement with the study. Also, the BET tool was not evaluated within a larger process of care and it could not be modified by the clinical team or patient. It is likely that if the healthcare professionals involved had been regularly reviewing and updating the action plan, educational material or notepads contained within BET, it would have been used more. Although the separate elements of a care bundle need to be individually assessed, 35 action plans are more effective if integrated within healthcare 36 ; and lack of review of asthma self-management plans by healthcare professionals leads to lack of interest by patients. 37 The action plan in BET was accompanied by brief written and one-to-one patient education, as we envisaged would be the case in clinical practice. This was delivered by phone as it was more convenient, permitted standardised training throughout a multi-centre study and was preferred by the patients. Many people in the focus groups liked the telephone education, and indeed structured telephone support has been shown to be beneficial for people with chronic heart failure. 38 However, a more intensive programme, or one integrated within the practice and championed by healthcare providers, may have had greater uptake and benefit. 39 We did not include training on skills such as problem solving, decision-making, goal setting and emotional management. Diabetes standards suggest greater than 10 hours of support are required for implementation of self-management plans. 40

We had broad inclusion criteria for this study, only requiring documented evidence of diagnosis and one exacerbation in the previous year, to maximise generalizability. However, our participants had less impaired HRQOL compared to other trials 41 (but similar to observational studies 16 ), and the majority of individuals in both groups felt confident about bronchiectasis at the end of the study. It is possible that the reason for the lack of detectable benefit is that the patients had relatively mild disease of long duration (average more than a decade) and had already developed mechanisms for managing their disease, so did not benefit from this alternative tool. Indeed, it was suggested in the focus groups that individuals with newly diagnosed disease would find the tool more beneficial, but we did not purposively sample those with a good response for the focus groups.
Conclusion
We have shown that BET did not improve outcomes. Many participants had mild disease, had already developed self-management techniques and/or considered themselves confident with their condition. The telephone education was appreciated by participants and could be utilised to a greater extent in the future. BET should not be used as it stands, but a simplified version should be evaluated in newly diagnosed patients, probably in the context of a wider care package with more intensive support. Recruitment into the study was high, suggesting a clinical need, but future studies should allow for up to a 50% withdrawal rate or utilise less burdensome outcome measures, perhaps capturing patients' ability to communicate with healthcare professionals or bronchiectasis-specific HRQOL.

There was no difference in the QALY score between the two groups. n = number for whom data were available; SD = standard deviation; QALY = Quality Adjusted Life Years over 12 months.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
1984-08-01T00:00:00.000
|
14817180
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jneurosci.org/content/4/8/1925.full.pdf",
"pdf_hash": "89ba474066e7a47965af6cde11598a7d559eaff7",
"pdf_src": "CiteSeerX",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:705",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "89ba474066e7a47965af6cde11598a7d559eaff7",
"year": 1984
}
|
pes2o/s2orc
|
Characterization of the rat mutant dystonic (dt): a new animal model of dystonia musculorum deformans
An inherited neurological disorder characterized by sustained twisting movements during waking has been discovered in Sprague-Dawley rats. The mutation follows an autosomal recessive pattern of inheritance and has been named dystonic (dt). The rat mutants are indistinguishable from normal littermates in open field behavior and climbing activity prior to postnatal days 9 to 10. Clinical signs begin to appear on day 10 and include twisting of the axial musculature, hyperflexion of the trunk, self-clasping of forelimbs and hindlimbs, and poor placement of the limbs during locomotion. No morphological lesions of neural or non-neural tissues have been observed with routine light microscopy. Dystonic rats demonstrate significantly elevated cerebellar norepinephrine levels, although levels in other terminal fields of the locus ceruleus are similar to those of normal littermates. No differences in the pattern or density of noradrenergic innervation were apparent in cerebellar tissue from dt rats examined with histochemical fluorescence techniques. These mutants were less sensitive than unaffected littermates to the akinesic effects of the dopamine blocker haloperidol. However, striatal dopamine levels were not reliably different from normal in dt rats, and their response to the movement-stimulating effects of apomorphine appeared normal. These findings suggest the presence of biochemical disturbances in the extrapyramidal system of dt rats. The dt rat may provide a useful model for human dystonia musculorum deformans.
Sustained, involuntary twisting movements are characteristic of a variety of dyskinetic illnesses referred to as dystonias (Marsden, 1980). Dystonic symptoms accompany a number of disease states, including well known degenerative conditions, such as Huntington's chorea and Parkinson's disease, and a variety of inflammatory, cerebrovascular, iatrogenic, and metabolic disorders (Zeman, 1970). Among the array of clinically evident dystonic states are the hereditary torsion dystonias (dystonia musculorum deformans) (Fahn and Eldridge, 1976). Autosomal recessive and autosomal dominant forms are established disease entities in human populations.
However, no convincing pathological change has been identified in postmortem studies of brains from patients dying with the disease (Eldridge, 1970;Zeman, 1970).
The absence of demonstrable morphological lesions at the light microscopic level and the finding that dystonic movements may occur as a complication of L-dopa therapy in patients with Parkinson's disease (Marsden and Harrison, 1974; Marsden, 1976; Zeman, 1976) have encouraged speculation that a biochemical disturbance may be the cause of the disease. Abnormalities in catecholamine metabolism have been suggested on the basis of some limited clinical data. Specifically, increased activity of plasma dopamine β-hydroxylase (Askenasy et al., 1980; Wooten et al., 1973), elevation of plasma norepinephrine (Zeigler, 1976), and decreased ventricular fluid levels of the norepinephrine metabolite, 3-methoxy-4-hydroxyphenylglycol (MHPG) (Wolfson et al., 1983), have been reported. There is also some evidence that patients with adult- but not childhood-onset dystonia may be distinguished by decreased cerebrospinal levels of the dopamine metabolite homovanillic acid (Tabaddor et al., 1978).

This work was supported by Grant NS18062 from the National Institute of Neurological and Communicative Disorders and Stroke.
Investigations of animal models of torsion dystonia have been confined primarily to the dystonia musculorum (dt) mouse, an autosomal recessive mutation originally described by Duchen and his co-workers (1964). Numerous lesions have been identified in the dt mouse. These include peripheral nervous system and spinal cord lesions, such as axonal swellings, gliotic changes, and Wallerian degeneration (Duchen et al., 1964). Morphological changes have also been noted in the red nucleus and caudate (Messer and Strominger, 1980). Biochemical measurements have yielded additional evidence of central nervous system pathology in the mouse mutant. Messer and Gordon (1979) have reported data suggesting the presence of an endogenous inhibitor of glutamate uptake in the basal ganglia, and evidence of altered cerebellar noradrenergic metabolism has been found (Riker et al., 1981). It appears that exploration of the full extent of lesions in the mouse model of dystonia has only begun. However, because of the presence of significant peripheral nervous system and spinal cord lesions which have not been observed in humans suffering from dystonia, the nature of the human and murine diseases may be dissimilar.
We have recently discovered a mutation in Sprague-Dawley rats with the clinical characteristics of torsion dystonia. This mutant, named dystonic (dt), is distinguished from the dt mouse by the absence of obvious central and peripheral neuropathological lesions. The data presented here describe the behavioral characteristics of the mutants, the pattern of inheritance, and the development of the dt syndrome. We also report the results of the anatomical, pharmacological, and neurochemical investigations with which we have endeavored to assess the potential value of this mutant as an animal model of human torsion dystonia. Regional measurements of central catecholamines were undertaken, and the behavioral effects of dopamine agonists and antagonists were examined.
Materials and Methods
Animals.
Mutant and phenotypically normal rats were obtained from the dystonic colony maintained at the University of Alabama in Birmingham. Mutants were obtained by mating heterozygotes. Pups born in the dystonic colony were routinely weighed on postnatal days 8, 10, 12, and 16 as a general index of health and to assess the ability of the mutants to compete with normal littermates for nutrition. From day 16 until sacrifice, dystonic rats received supplemental hand feedings of wet mash made from standard laboratory chow or Esbilac (Borden), a milk-based diet. Nine litters of 8 to 13 pups each were used for statistical analysis of the growth curves of rats from the dystonic colony. All the litters contained dystonic pups. A total of 17 male and 18 female dystonic rats and 28 male and 34 female normal rats were measured.
Pedigree analysis. Seventy-six litters from the dystonic colony were analyzed over a 3-year period. Litters selected for analysis were from parents known to have produced affected young, but not all of these litters included mutant pups.
Behavioral observations. Pups of unknown phenotype were videotaped individually in an open field beginning on postnatal day 6 to determine the earliest age at which a reliable phenotypic differentiation of normal and mutant pups was possible. Each pup was videotaped daily for 1 to 2 min on postnatal days 6 to 12. Videotapes were reviewed to assess whether qualitative behavioral differences existed between normal and dt rats prior to the onset of obvious clinical symptomatology.
A climbing test similar to that described by Altman and Sudarshan (1975) was also used to evaluate the locomotor development of dt pups.
A 1-cm wire mesh incline was placed at 45°, 60°, 70°, or 80° depending on the age of the pups. Pups of 6 to 12 days of age were placed head up on the incline. A positive score was recorded if both hindlimbs moved upward at least 2 cm. Each pup was given three trials in which to display a positive response. At least 10 mutants and 20 normal littermates were tested on the developmentally appropriate inclines on each day.
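The scoring rule just described can be sketched as a small predicate. This is an illustrative reconstruction only; the tuple-based trial format is an assumption, not the paper's data format:

```python
def climb_score(trials):
    """Return True (positive score) if both hindlimbs moved upward
    at least 2 cm on any of up to three trials.

    `trials` is a hypothetical list of (left_cm, right_cm) upward
    displacements, one tuple per trial.
    """
    return any(left >= 2.0 and right >= 2.0 for left, right in trials[:3])
```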
Anatomical investigations.
Because dt rats have greater difficulty feeding than do normal rats, whole brain weights were measured in affected and unaffected pups as an index of brain development. Rats were decapitated and the brains removed and weighed at 16, 18, and 20 days of age. At least eight dt and eight phenotypically normal littermates were used at each age. Gross examination of brain conformation and routine light microscopy studies were performed using four mutants and four normal littermates 12 to 20 days of age. The rats were anesthetized and perfused intracardially with 10% buffered formalin. Neural and non-neural tissues were paraffin embedded, sectioned, and stained with hematoxylin and eosin. Some sections of neural tissue were stained with luxol fast blue or periodic acid-Schiff reagent. Particular attention was directed toward examination of the spinal cord for gliotic and Wallerian degenerative changes. An additional four mutant and four normal rats of 21 to 25 days of age were anesthetized and perfused as described above. The brains were sectioned at 48 µm thickness on a cryotome and stained with cresyl violet. The striatum and red nucleus were examined in detail for morphological changes such as those reported in the mouse dystonia musculorum by Messer and Strominger (1980).
Since hyperinnervation of the cerebellum by the noradrenergic projection of the locus ceruleus has been reported in several mutants with motor disorders (Landis et al., 1975; Levitt and Noebels, 1981; Kostrzewa et al., 1982; Muramoto et al., 1982), the cerebella from four dt and four normal rats were prepared for histofluorescent visualization of monoamines. Twenty-day-old rats were injected intraperitoneally with 200 mg/kg of pargyline and decapitated 1 hr later. The brains were rapidly removed and sagittally sectioned at 30 µm in a cryotome maintained at -30°C. Sections were processed using the sucrose-potassium phosphate-glyoxylic acid method described by de la Torre (1980) and examined with a Leitz Orthoplan microscope equipped for epifluorescence. Adjacent sections were mounted and stained with cresyl violet. Two additional animals from each group were examined without pargyline pretreatment. For comparisons between dt and normal rats, both midvermian and paravermian sections were examined. The continuity of the superior cerebellar peduncle with the cerebellar medulla served as a landmark for the paravermian sections. The sections were divided into quadrants, and areas of high, moderate, and low fluorescent intensity were examined and photographed in each quadrant.
Neurochemical analysis. Regional assays of norepinephrine (NE) and dopamine (DA) were performed in both motor and nonmotor areas in 10 mutants and 10 normal littermates between 16 and 25 days of age. After decapitation, the cerebellum, striatum, and hippocampus were dissected and frozen in liquid nitrogen as described by Morley and coworkers (1977). The telencephalic tissue remaining after this dissection was also frozen and assayed. Fluorometric assays were performed using a modification of the technique described by Jacobowitz and Richardson (1978).
Psychopharmacological studies.
The effects of haloperidol (Haldol, McNeil), a DA receptor blocker, and apomorphine (apomorphine HCl, Sigma), a DA agonist, were evaluated on the movements of dystonic rats between 16 and 25 days of age. To determine whether there were any drug effects specific to dystonic movements and postures, mutants were videotaped for 3 min in an open field before and after drug administration. The postdrug test was conducted 30 min after injection for animals treated with apomorphine (1, 2, or 3 mg/kg) and 60 min after for those treated with haloperidol (10 mg/kg). All drugs were administered intraperitoneally.
During these periods mutants were scored for characteristic movements or postures: torticollis; falls (movements involving the axial musculature and resulting in a fall to the side with the extremities rigidly extended); paw clasps (front paws, hind paws, or one front and one hind paw clasped together); runs (rapid forward movements in which left and right limbs are moved simultaneously). Each occurrence of these events was counted.
In addition to the videotaped trials, the locomotor activity of mutants and normal littermates was measured in automatic activity monitors after intraperitoneal injections of haloperidol or apomorphine. Activity following drug treatment was compared with activity following 0.9% saline injections in 18- to 20-day-old pups. At this age normal rats display catalepsy in response to neuroleptics (Baez et al., 1976) and increases in general motor activity in response to apomorphine (Lipton et al., 1980; Reinstein et al., 1978). Animals received drug or saline injections on successive days. The order of treatment was counterbalanced within each group.
The activity monitors were chambers measuring 18 × 28 cm and crossed by eight infrared beams positioned 3.25 cm apart on the long wall. For testing, rats were housed individually in standard clear plastic rodent cages that were placed in the activity chambers. The infrared beams crossed the cages 2 cm above the floor of the cages. The cage floors were covered with bedding material and warmed to 28°C. Breaks in the infrared beams were detected by phototransistors and counted by an Apple II computer. Rats were placed in the activity monitoring chambers and allowed 30 min to adapt to the environment prior to drug or saline injection. Immediately following the injections, the rats were returned to the chambers and activity was measured for 30 min after apomorphine injections of either 0.5 mg/kg or 1 mg/kg and for 60 min after haloperidol injections of 10 mg/kg. All rats were tested between 2 and 5 P.M., and both dt and normal rats were tested at the same time. Each rat received only one drug and dose. Beam crossings were summed over 5-min periods, and average activity scores were obtained for each session based on the 5-min counts. The data for the apomorphine and haloperidol tests were analyzed in separate repeated measures analyses of variance.
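The beam-break tallying described above can be sketched as follows. This is a minimal reconstruction assuming event timestamps in seconds; the paper's original Apple II counting software is, of course, not available:

```python
def bin_beam_breaks(timestamps_s, session_s=1800, bin_s=300):
    """Count infrared beam breaks in consecutive 5-min (300-s) bins
    over a 30-min session, mirroring the summation described in the text."""
    n_bins = session_s // bin_s
    counts = [0] * n_bins
    for t in timestamps_s:
        if 0 <= t < session_s:
            counts[int(t // bin_s)] += 1
    return counts

def session_average(counts):
    """Average activity score for the session, based on the 5-min counts."""
    return sum(counts) / len(counts)
```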
Results
Clinical syndrome.
Dystonic rats appear normal until 10 days after birth, when a stiff paddling gait with frequent falling to the side, excessive pivoting or circling, and torticollis (twisting of the neck, frequently from side to side) are exhibited in the waking state. The disorder progresses rapidly over the next few days. Advanced clinical signs include abnormal limb placement during locomotion, falling to the side, turning in of the forelimbs, self-clasping of forelimbs and hindlimbs, rigidity of the tail, and hyperflexion of the trunk (Fig. 1). There is resistance to passive movements of the limbs during the dystonic spasms. The limbs spring back to their original position if displaced by the observer. The growth curves of normal and dystonic rats were analyzed in a phenotype × age repeated measures analysis of variance followed by Newman-Keuls tests. Separate statistical analyses were carried out for male and female rats. Body weights of normal and dystonic rats were similar at least until postnatal day 16, when normal pups began to show evidence of more successful competition for nutrition than mutants, as indicated by a slightly higher mean body weight. Sixteen-day-old male but not female mutants weighed significantly (p < 0.05) less than normal pups of the same age and sex. By day 16 the mean body weight for normal male rats was 25.3 gm (SD = 4.0) and for dystonic males, 22.2 ± 2.8 gm. For female rats mean body weights on day 16 were 24.2 ± 4.4 gm for normal rats and 21.9 ± 4.8 gm for mutants. The weight differences between affected and unaffected pups are exaggerated after weaning, although the mutants can be maintained in apparent good health for at least 30 to 35 days with hand feeding.
Pedigree analysis. A partial pedigree of a rat family with dystonia is shown in Figure 2. Of 679 progeny included in the pedigree analysis, 170 had neurological signs and 509 were phenotypically normal. The ratio of diseased pups to total pups (1:4) is that expected for an autosomal recessive trait. Males and females were affected equally, and phenotypic expression of the mutation occurred only in homozygotes.
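The fit of the reported counts (170 affected, 509 unaffected) to the 3:1 Mendelian expectation for an autosomal recessive trait can be checked with a simple goodness-of-fit statistic. A stdlib-only sketch (the critical value 3.84 is for df = 1 at p = 0.05):

```python
def chi2_goodness_of_fit(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

affected, unaffected = 170, 509          # counts from the pedigree analysis
total = affected + unaffected
expected = [total * 0.25, total * 0.75]  # 1:3 affected:unaffected under recessive inheritance
stat = chi2_goodness_of_fit([affected, unaffected], expected)
# stat falls far below the df = 1 critical value of 3.84, so the observed
# counts are consistent with autosomal recessive transmission
```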
Open field analysis. Prior to day 10, dystonic pups displayed motor activities in the open field that were qualitatively similar to those of normal littermates. The activities evaluated included head elevation, grooming, pivoting, crawling, and stability in the quadruped stance. On day 10, when normal littermates began coordinated crawling, dystonic pups showed only a few asymptomatic steps with their movement dominated by pivoting and frequent falling to the side. At 11 days of age, normal pups showed more coordinated crawling, a decline in pivoting behavior, and only a rare loss of balance from a quadruped stance. In 11-day-old dystonic pups, pivoting behavior increased and ability to maintain a quadruped stance was limited. Torticollis, alternating irregularly from side to side, and frequent falling were apparent at this age. By 12 days of age, dystonic pups are readily differentiated from normal littermates even by an untrained observer.
Climbing tests. No reliable difference in climbing ability between normal and dystonic pups was evident when pups were tested on 45° or 60° inclines through day 10. Significant differences in climbing ability began to emerge in 9- and 10-day-old pups on 70° and 80° inclines (Fig. 3). Normal pups showed significantly more climbing ability than dystonic pups at 9 days of age on the 70° incline and at 10, 11, and 12 days on both the 70° and 80° inclines. Normal and dystonic rats are equally capable of climbing a 60° incline through day 10. Normal rats show more climbing ability than 9- to 12-day-old dystonic rats on a 70° incline. Data were analyzed by the χ² test.
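The per-incline group comparison amounts to a χ² test on a 2×2 pass/fail table. A minimal sketch, using hypothetical counts (the paper's raw pass/fail tallies are not given in the text):

```python
def chi2_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table [[a, b], [c, d]],
    e.g., pass/fail counts for normal vs. dystonic pups on one incline."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts for one incline/day: 18 of 20 normal pups passed,
# 3 of 10 dystonic pups passed -- illustrative only, not the paper's data.
stat = chi2_2x2(18, 2, 3, 7)
# a statistic above 3.84 (df = 1, p = 0.05) would mirror the reported
# group difference on the steeper inclines
```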
Anatomical investigations.
Mean brain weights were identical in 16-day-old dt and normal rats (n = 14/group) but differed significantly in older rats, when analyzed in a phenotype × age analysis of variance followed by Newman-Keuls tests. Brain weights in normal rats increased by 14% over the interval from postnatal days 16 to 20. However, dystonic rats had similar brain weights at 16, 18, and 20 days of age. The mean brain weight (± SD) of the normal pups on day 20 was 1.39 ± 0.05 gm and of the mutants, 1.24 ± 0.07 gm (n = 8/group).
Gross examination of brain conformation revealed no difference between normal and dystonic rats. Light microscopic examination of neural and non-neural tissue also failed to reveal significant morphological differences. Specifically, the dt rats showed none of the gliotic or Wallerian degenerative changes seen in the spinal cord or peripheral nervous system of the dt mouse (Duchen et al., 1964), nor were the degenerative changes that have been observed in the red nucleus or striatum (Messer and Strominger, 1980) of the dt mouse in evidence.
Examination of the noradrenergic fibers of the cerebellum in dt rats revealed a pattern of innervation similar to that seen in normal rats (Fig. 4). Fluorescent varicosities were evident in all cerebellar layers, although the highest concentration was found in the Purkinje cell layer. Normal and dt rat cerebella could not be discriminated on the basis of density of fluorescent terminals. However, the intensity of fluorescence appeared to be increased in the mutants. Neurochemical analysis. A significant elevation of NE (42%) was found in the cerebellum of the mutant rats in comparison with normal littermates (Table I). However, no reliable difference was evident in the hippocampus or the remaining telencephalon. Nor were any significant differences apparent in DA levels of normal and mutant rats in either the striatum or the remaining telencephalon. Tissue section weights did not differ reliably in mutant and normal rats in this sample.
Psychopharmacological studies. Mutant pups showed no inhibition of dystonic movements and postures in 3-min open field tests following a 10 mg/kg dose of haloperidol. The mean number of dystonic movements (± SE) counted before and after haloperidol administration were 46.2 ± 5.1 and 44.2 ± 16.2, respectively (n = 4). The only reliable change observed in a specific behavior characteristic of the dt rat was an increase in the number of "runs" following haloperidol injection. This behavior increased from a mean frequency (number per 3-min period ± SE) of 4.8 ± 2.1 to 11.0 ± 1.6 (paired t = 9.99, df = 3, p < 0.01). In the 3-min open field observation, apomorphine did not appear to produce any overall effect on the frequency of dystonic movements. Only at the highest dose used (3 mg/kg) was any statistically significant change in the frequency of a specific movement or posture observed. Observations made on both normal and dt rats in the automatic activity monitors indicated that the activity levels of the two groups were not significantly different during the saline trials. Haloperidol produced akinesia in both groups (F(1,16) = 71.0, p < 0.0001). However, the effect was not as pronounced in the dt animals, as indicated by a significant group × trial interaction (F(1,16) = 4.49, p < 0.05). These data are summarized in Table II, where activity following drug treatment is expressed as a percentage of the activity that occurred during the saline trials. Apomorphine increased activity levels in both groups in a dose-dependent manner. However, only the 1 mg/kg dose of apomorphine increased activity levels significantly above those of saline trials (F(1,28) = 6.17, p < 0.02) (Table II). There was no indication of a difference in the effect of the drug on the two groups.
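The paired t test used for the pre/post "runs" comparison can be sketched with the standard library. The per-animal counts below are hypothetical, chosen only to approximate the reported group means (~4.8 pre, ~11.0 post, n = 4); they are not the paper's raw data:

```python
import math

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for matched pre/post counts."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Hypothetical per-animal "run" counts before and after haloperidol (n = 4)
pre = [3, 4, 6, 6]
post = [10, 11, 12, 11]
t_stat, df = paired_t(pre, post)
# with df = 3, |t| above ~3.18 is significant at p < 0.05 (two-tailed)
```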
Discussion
The dt rat displays clinical symptoms of dystonia with an onset at postnatal day 9. The disease follows an autosomal recessive pattern of inheritance and is not associated with any obvious morphological lesion of the central or peripheral nervous system. However, the results of pharmacological and neurochemical studies suggest the presence of abnormalities in the extrapyramidal system. At the behavioral level, the symptoms seen in the dt rats are similar to those described for humans with dystonia musculorum deformans (Fahn and Eldridge, 1976;Marsden, 1976). The abnormalities seen in the dt rat involve sustained twisting of the limbs or axial musculature, are evident only in the waking state, and occur following a period of normal development. The abnormal postures displayed by the dt rat may be maintained for several seconds, and during this time the muscles resist passive stretch. As in the autosomal recessive form of torsion dystonia, once apparent the clinical symptoms of the dt rat progress rapidly. The dystonic spasms of an affected pup can be seen in the home cage, but the severity and frequency are greatly increased by placing the animal on a smooth surface in the center of a large open area.
Neuroanatomically, the disease of the dt rat shares with dystonia musculorum deformans an absence of any gross morphological abnormality or sign of degenerative processes in either the central or peripheral nervous system. The apparent absence of degenerative changes in the basal ganglia in Nissl-stained material from dt rats showing advanced clinical signs is complemented by recent studies of Golgi-impregnated material (McKeon et al., 1984). These studies have also failed to indicate conspicuous cytoarchitectural or morphological changes in the striatum of dt rats. No qualitative differences were detected in the cell types represented, the relative abundance of different cell types, or the dendritic and somatic characteristics of the identified cell types of normal and dystonic rats.
Clearly, more detailed anatomical investigations may reveal morphological disturbances in both dt rats and in humans dying with idiopathic torsion dystonia. However, the light microscopic studies of the nervous system of the dt rat conducted to date serve to distinguish the disease of this mutant from other neurological diseases. For example, there is no evidence in the dt rat of the cerebellar hypoplasia or abnormalities in foliation or cell arrangement found in several other neurological mutants with movement disorders (e.g., Landis et al., 1975). Nor are the central and peripheral nervous system abnormalities seen in Nissl-stained material from the dystonia musculorum mouse in evidence in the dt rat (Duchen et al., 1964;Messer and Strominger, 1980). The apparent morphological integrity of the extrapyramidal system in dt rats with advanced clinical signs also distinguishes the disease of this mutant from degenerative diseases of the basal ganglia, such as Parkinson's disease and Huntington's chorea. The presence of elevated NE levels in the cerebellum of the dt rat is consistent with evidence of biochemical abnormalities in the noradrenergic systems of humans with torsion dystonia. Specifically, decreased levels of the NE metabolite MHPG have been reported in a subset of patients with dystonia, those displaying a childhood onset and rapid progression of symptoms (Wolfson et al., 1983). The MHPG levels of this group were significantly lower than those of another group with more focal adult-onset dystonia as well as being lower than a group of patients with other neurological disease, including some with motor disorders. Childhood-onset dystonia is generally thought to be inherited (Marsden and Harrison, 1974). Thus, alterations in central noradrenergic metabolism may be of particular significance in the etiology or pathogenesis of inherited dystonia.
Cerebellar NE metabolism may be altered as a compensatory response to another as yet undetermined biochemical abnormality. Elevations in cerebellar NE levels have been induced experimentally in the rat on or before postnatal day 21 with severe undernutrition during the pre-and early postnatal period (Miller et al., 1982). Several lines of evidence suggest that undernutrition is not the cause of the differences we report in the dt rat. First, growth curves for the dt rat appear normal during the early postnatal period. The mutants begin to lag their unaffected littermates only after day 16, suggesting that their nutrition is adequate through the suckling period. Brain weight follows a pattern similar to body weight. This is not the case in studies of undernutrition in which increases in NE concentration are correlated with a decrease in brain weight (Shoemaker and Wurtman, 1971;Stern et al., 1975). In addition, elevations in NE levels are widespread in undernourished animals, rather than localized as in the dt rat.
Alterations in cerebellar NE have been reported in numerous mouse mutants with abnormal movement patterns, including the dystonia musculorum mouse (Riker et al., 1981). In the dt mouse, steady-state levels of cerebellar NE appear similar to control levels; however, levels of the NE metabolite MHPG are elevated. In many mutants, including the staggerer, weaver, reeler, and Purkinje cell degeneration mice, elevations in cerebellar NE concentrations have been reported but appear to be a consequence of cerebellar hypoplasia (Landis et al., 1975;Kostrzewa et al., 1982;Muramoto et al., 1982). The tottering mouse, a mutant that displays focal motor seizures with a postnatal onset of 12 to 14 days, is an exception. In this mutant a hyperinnervation of the cerebellum, hippocampus, neocortex, and other areas may be due to an increase in the number of noradrenergic axons in the terminal fields of the locus ceruleus (Levitt and Noebels, 1981). The dt rat presents yet a different pattern. The rat mutant resembles the tottering mouse in demonstrating elevated cerebellar NE levels but normal cerebellar weight; however, the neocortical and hippocampal levels of NE are not reliably different from control values in the dt rat. This suggests that in the dt rat any hyperinnervation must involve only specific cerulear terminal fields. Our microscopic examination of the cerebellum using glyoxylic acid histochemical fluorescence does not support the hypothesis of hyperinnervation in the dt rat. However, these observations must be interpreted with caution. As Kopin and others (1974) have pointed out, there are limitations on the use of fluorescence histochemistry as a quantitative technique. It may not be possible to discriminate an increase in the number of axons of the magnitude suggested by our neurochemical measurements. Cerebellar NE is found in neurons with cell bodies in the locus ceruleus and other brainstem areas (Pickel et al., 1974;Tohyama, 1976). 
The NE terminals innervate somata and major dendrites of Purkinje cells (Chan-Palay, 1977). The NE innervation inhibits the firing of Purkinje cells (Bloom et al., 1971; Hoffer et al., 1971) and may play an important modulatory role in cerebellar physiology by altering the efficacy of both inhibitory and excitatory inputs to the Purkinje cells (Moises et al., 1981, 1983). Other recent work on the dt rat has shown that glutamic acid decarboxylase (GAD) activity is increased selectively in deep cerebellar nuclei of the mutants in comparison with littermate controls (Oltmans et al., 1984). A change in GAD activity at this site may indicate an alteration in the activity of the GABAergic Purkinje cells (Chan-Palay, 1982). Taken together, the findings of increased NE levels and GAD activity may reflect significant changes in cerebellar function. Further investigation will be needed to determine whether the elevated cerebellar NE seen in the dt rat is causally related to either the changes in GAD activity found in the deep nuclei or to the appearance of the dystonic movements.
The absence of obvious morphological lesions of the basal ganglia has not ruled out the basal ganglia as the potential site of the primary defect in dystonia musculorum deformans. Dystonia is frequently associated with drug reactions that are presumed to involve the basal ganglia (Heline, 1978). These include complications of L-dopa therapy (Keenan, 1970) and both acute and late-onset responses to neuroleptic drugs (Crane, 1968; Burke et al., 1982). In some patients, anticholinergic and dopaminergic agents improve and cholinergic agonists and dopaminergic blockers exacerbate the symptoms of dystonia, leading some investigators to propose that dystonia is caused by a deficiency of dopamine coupled with an excess of acetylcholine (Garg, 1982; Stahl et al., 1982). However, the conclusions to be drawn about the etiology of torsion dystonia from the response of patients to drug treatments are by no means clear (Marsden, 1981), and reports of altered homovanillic acid levels in the cerebrospinal fluid of dystonic patients suggest involvement of dopaminergic systems in only certain classes of patients (Tabaddor et al., 1978). Measurements of DA in the dystonic rat suggest that dystonia can be present without a loss of striatal DA. However, the relative insensitivity of the dt rat to the DA blocker haloperidol seen in this study and other studies (McKeon et al., 1984) may indicate a defect in the pathway by which DA blockers exert their effects on movement. The site of such a defect is not clear at present. The apparently normal response of the dt rat to apomorphine may suggest that DA receptors are not directly involved. However, the anatomical site responsible for apomorphine's effects on movement has not been precisely localized (Fink and Smith, 1980). Receptors in both the nucleus accumbens septi and the corpus striatum may be involved (Kelly et al., 1975).
Thus, different populations of DA receptors may mediate the increased movement seen with apomorphine and the akinesic effects of haloperidol.
Comparisons of spiroperidol binding parameters in the striatum of dt and normal rats, however, do not suggest that the decreased responsiveness of the dt rat to haloperidol is due to a defect at the level of the striatal DA receptor (McKeon et al., 1984).
The adequacy of the dt rat as a model of human idiopathic torsion dystonia is not easily assessed. As discussed by Marsden (1976), the diagnosis of the human disease is based on the presence of clinical symptoms and the absence of other neurological deficits or probable causes. However, the availability of a rat mutant with a spontaneously occurring dystonic syndrome offers significant opportunities for testing hypotheses about the cause of dystonic symptoms. The search for biochemical markers for the disease and the evaluation of potential therapies may be facilitated.
Insofar as comparisons can be made given our current scant knowledge of the human disease, the dt rat appears to be a promising model for further study.
Neuropathic Pain in Neurologic Disorders: A Narrative Review
Neuropathic pain is defined as a painful condition caused by neurological lesions or diseases. Sometimes, neurological disorders may also be associated with neuropathic pain, which can be challenging to manage. For example, multiple sclerosis (MS) may cause chronic centralized painful symptoms due to nerve damage. Other chronic neuropathic pain syndromes may occur in the form of post-stroke pain, spinal cord injury pain, and other central pain syndromes. Chronic neuropathic pain is associated with dysfunction, disability, depression, disturbed sleep, and reduced quality of life. Early diagnosis may help improve outcomes, and pain control can be an important factor in restoring function. There are more than 100 different types of peripheral neuropathy and those involving sensory neurons can provoke painful symptoms. Accurate diagnosis of peripheral neuropathy is essential for pain control. Further examples are represented by gluten neuropathy, which is an extraintestinal manifestation of gluten sensitivity and presents as a form of peripheral neuropathy; in these unusual cases, neuropathy may be managed with diet. Neuropathic pain has been linked to CoronaVirus Disease (COVID) infection both during acute infection and as a post-viral syndrome known as long COVID. In this last case, neuropathic pain relates to the host’s response to the virus. However, neuropathic pain may occur after any critical illness and has been observed as part of a syndrome following intensive care unit hospitalization.
Multiple sclerosis (MS) is an autoimmune disease in which the immune system attacks the myelin sheath around certain nerve fibers. It is a complex disease with several known phenotypes [3]. It can result in functional motor and sensory deficits caused by immune-mediated inflammatory processes, demyelination, and axonal damage [4]. This damage is related to several idiopathic inflammatory-demyelinating diseases of the central nervous system [5].
While MS involves demyelination in the central nervous system, disease severity, symptomatology, and disability do not correlate well with the degree of demyelination [6]. Earlier it was thought that MS was mediated primarily by pro-inflammatory T-cells that were activated by the autoimmune system. However, it is now believed that B-cells, dendritic cells, and monocytes also play a major role [7]. Oligodendrocytes produce myelin to metabolically support the nerve axons [8]. In animal studies, it was found that new oligodendrocytes form to replace damaged myelin, but this task is not performed by existing oligodendrocytes. It is not clear if, and to what extent, this may occur in humans [9].
The prevalence of MS ranges from 2 per 100,000 in Japan and Sub-Saharan Africa to 100 per 100,000 in North America and Northern Europe [10]. MS reduces life expectancy by six to seven years, although a study found no differences in mortality in the first 20 years of the disease [11]. Relapsing-remitting MS, the most frequently encountered form of the disease, is more prevalent in women than in men [11]. In the United States, the highest prevalence of MS occurs among people between the ages of 45 and 49 years [12]. MS can be heritable but genetic markers have not been completely elucidated [11]. A study from Spain reported that the mean age at the first symptom of MS was 32.2 years, the average delay to diagnosis was 3.1 years, and 71.2% of patients had relapsing-remitting MS [13]. Cold countries seem to have a higher prevalence of MS [14] but there is no bright line dividing North and South in this respect [15]. Globally, about 2.5 million people have some form of MS [16].
The clinical course of MS can be challenging to describe because of the range of different symptoms, their relative severity, and outcomes [17]. The main phenotypes of MS were at first defined only clinically: relapsing-remitting MS, primary or secondary progressive MS, and progressive-relapsing MS. When these original phenotypes were reconsidered, the radiologically isolated syndrome and the clinically isolated syndrome were added to improve the descriptors by incorporating activity and disease progression into the phenotype [18]. The inflammatory response in MS is transient. The demyelination and remyelination phases lead to periods of relapse and remission, but over time, remyelination effects are not durable. Relapsing-remitting MS is clinically characterized by episodes of acute exacerbation of neurologic symptoms punctuated by periods of partial recovery, but with no apparent progression of the disorder. However, primary progressive MS is characterized by a steady, progressive loss of neurologic function with no distinct episodes of remission. Secondary progressive MS likewise manifests as a steady loss of neurologic function but may be punctuated by episodic remission. In progressive forms of MS, microglial activation and the widespread neurological damage it promotes result, over time, in neurodegeneration, prominent symptoms, disability, and concomitant neuropathic pain [17].
The radiologically isolated syndrome is diagnosed when incidentally identified abnormalities in an imaging study suggest demyelination before there are clinical signs or symptoms of MS [19]. The clinically isolated syndrome occurs when a patient exhibits clinical signs or symptoms suggesting MS but does not meet the diagnostic criteria for MS. These symptoms are usually acute, monofocal clinical events, such as abnormalities of the optic nerve [20]. These syndromes are emerging as important for MS investigations, even if they are arguably not true phenotypes.
Approximately 29-86% of MS patients complain about pain, and central neuropathic pain is present in the majority of cases [21]. About 80% of MS patients develop spasticity as well [21]. Among the most frequently reported painful symptoms of MS are pain in the extremities, trigeminal neuralgia, low back pain (LBP), and headache, none of which reliably respond to conventional analgesic therapy [21]. Painful symptoms are more likely to be reported with disease progression than at its outset [22]. Of course, pain may occur at any point in the disease and, in some cases, reported pain is secondary to spasticity, mood disorders, and fatigue [23]. While MS patients report low levels of satisfaction with analgesic therapy, about 30% of all drugs prescribed to MS patients are prescribed to help manage pain [23]. Patient self-reports of pain are important to consider in MS because the incidence and severity of pain in MS do not necessarily correlate with the extent or severity of the underlying disease [24].
Although headache is a common symptom of MS, its prevalence and type vary. Migraine is frequently observed, but it remains unclear whether migraine is merely associated with MS or is a true comorbidity [25]. In a study of MS patients (n=137), the lifetime prevalence of headache was 57.7%, with 31.9% having tension-type headache and 25.0% migraine. Migraine has been significantly correlated with relapsing-remitting MS, but causal mechanisms have not been identified [26]. In some cases, patients had pre-existing headaches that either continued or were exacerbated by MS [27]. In a study comparing MS patients to the general population, the relative frequency of migraine headaches among MS patients was three-fold higher than for the general population both for women (55.7% vs. 17.1%, prevalence ratio 3.26, p<0.001) and men (18.4% vs. 5.6%, prevalence ratio 3.29, p<0.001), although migraines occur more frequently in women [28]. In a study of 18,955 MS patients, migraine prevalence was 7%, compared to 2.8% in the control group [29]. The "coexistence" of migraine with MS is well documented, but a causal link is missing [30]. A meta-analysis found that the prevalence of migraine in MS patients was inconsistent, varying by geography [31].

The estimated number of MS patients around the world is increasing and expected to continue to increase, with the most striking increases in Africa (59% increase in prevalence from 2013 to 2020), the Americas (87%), and Southeast Asia (58%) [38]. As MS is an incurable, lifelong condition that increases the risk of neurological dysfunction, it is crucial that it is better understood with respect to safe and effective treatment regimens.
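The prevalence ratios reported above follow directly from the quoted rates; as a quick arithmetic check (rates taken from the comparison study cited as [28]):

```python
# Prevalence-ratio check for migraine in MS vs. the general population,
# using the percentages reported in the cited comparison study.
ms_women, gen_women = 55.7, 17.1   # % of women with migraine
ms_men, gen_men = 18.4, 5.6        # % of men with migraine

pr_women = ms_women / gen_women
pr_men = ms_men / gen_men

print(round(pr_women, 2))  # 3.26
print(round(pr_men, 2))    # 3.29
```

Both ratios reproduce the roughly three-fold excess reported in the study.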
Painful peripheral neuropathy
There are several different types of peripheral neuropathy, and those involving the sensory neurons can be associated with moderate to severe pain. Many forms of neuropathy involve all three of the main types of nerves to some extent: sensory, motor, and autonomic nerves. Neuropathy may occur with damage to small- and/or large-fiber myelinated or unmyelinated nerves. Peripheral neuropathy typically involves small-fiber nerves and can provoke classical neuropathic painful symptoms perceived as burning, "electrical," or shooting pain as well as allodynia [39].
Peripheral neuropathies are often polyneuropathies, involving multiple nerves. They may be described by the type of nerves affected (sensory, motor, autonomic), the site of nerve injury (distal or proximal), the nerve component affected (demyelinating or axonal), etiology (e.g., diabetic neuropathy), or pattern (symmetric or asymmetric) [40]. Length-dependent neuropathy affects the longer fibers, as a result of which the more distal areas of the body are affected first. Described as a "dying back" phenomenon, length-dependent neuropathy typically affects toes and fingers first and is a form of sensory neuropathy. In contrast, asymmetrical neuropathy can be purely sensory with a patchy distribution and no discernible pattern; asymmetrical neuropathy is common in cancer patients but may occur in other patients as well. Asymmetrical sensorimotor neuropathy involves multiple individual nerves, is classically associated with vasculitis, and is relatively rare.
About two-thirds of all patients with peripheral neuropathy have painful symptoms, which can be challenging to manage effectively [41]. The diagnosis and treatment of peripheral neuropathy can be complicated further by the fact that some patients are asymptomatic or have only diffuse symptoms [42]. Peripheral neuropathy is a common disorder, and the large subpopulations who are geriatric, obese, and/or have diabetes are at elevated risk [43]. The prevalence of peripheral neuropathy is 2.4% in the general population. It increases to 8% in individuals above 55 years [44,45]. It occurs in 12.1% of obese but normoglycemic patients and in 40.8% of obese individuals with diabetes [46].
Length-dependent peripheral neuropathy is symmetric and begins at the termini of the longest nerves, that is, in the feet and toes. Typically, a loss of sensation, numbness, or unusual sensory symptoms, such as tingling, burning, "pins and needles," or electrical sensations, occurs antecedent to motor weakness [47]. Pain locations are symmetric and gradually move upward, with symptoms in the hands manifesting around the same time as symptoms occur in the knee and leg. There are no deficits in proprioception until the condition advances [47]. Polyradiculoneuropathy and multifocal neuropathy may arise in the presence of cancer, infections, vasculitis, inflammation, or other conditions [47].
The diagnosis of peripheral neuropathy should include a detailed patient and family history, physical examination, serologic testing, and an assessment of the patient's comorbidities. In most cases, a length-dependent peripheral neuropathy can be diagnosed, but in 20% of cases, the etiology of the peripheral neuropathy remains unknown [48]. It is important to determine whether the neuropathy is axonal or involves demyelinated fibers, as treatment courses differ. The underlying cause of the neuropathy, if known, can be crucial to finding an effective treatment. For instance, peripheral neuropathy may be caused by cancer or chemotherapy, a viral infection, or vasculitis. Axonal peripheral neuropathy is caused by inflammation, infection, ischemia, or metabolic disruptions, or may be genetic. A range of conditions is associated with neuropathy: diabetes, vitamin B-12 deficiency, renal dysfunction, hypothyroidism, and coronavirus disease (COVID), among others. Neuropathy may also arise from long-term excessive alcohol consumption, certain prescription drugs, or the use of other neurotoxic agents [47,49].
Diagnostic tests may involve electromyography (EMG) and nerve conduction studies. EMG tests assess the integrity of the large myelinated Aβ fibers, but cannot evaluate the C-fibers or the small-diameter Aδ fibers. Skin biopsies can determine the intraepidermal nerve fiber density (somatic unmyelinated C-fiber termini) and have 90% sensitivity and 97% specificity [48]. In some cases, a nerve biopsy may be appropriate [48]. Regardless of the etiology or clinical course of painful peripheral neuropathy, it has an adverse effect on the quality of life [50].
Gluten neuropathy refers to peripheral neuropathy that is actually the extraintestinal manifestation of gluten sensitivity, defined as having positive gliadin antibodies and/or tissue transglutaminase or endomysium antibodies [51]. In a case-controlled study of 53 patients with gluten neuropathy and 53 matched controls, the gluten neuropathy patients had lower scores in physical functioning, energy, and overall health [51]. In patients with gluten neuropathy, dietary changes may both improve gluten sensitivity and relieve neuropathic symptoms [52]. In a study of 60 patients with gluten neuropathy (77% men, with a mean age of 69.9 years), 55.0% suffered from neuropathic pain. Patients without pain were more likely to be on a strict gluten-free eating plan (55.6% vs. 21.2%, p=0.006), and multivariate analysis showed that gluten-free diets were associated with lowering the likelihood of neuropathic pain by 88.7% [53].
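The reported 88.7% reduction corresponds to an adjusted odds ratio of about 0.113 (1 − 0.887). A crude, unadjusted odds ratio can be sketched from the proportions quoted above; the patient counts below are approximations reconstructed from those proportions, and the crude estimate necessarily differs from the published multivariate-adjusted figure:

```python
# Crude odds ratio for neuropathic pain by gluten-free diet (GFD) status.
# Counts reconstructed from the reported proportions (n=60, 55% painful;
# strict GFD in 21.2% of the pain group vs. 55.6% of the pain-free group).
pain_gfd, pain_no_gfd = 7, 26        # painful patients on / off a GFD
nopain_gfd, nopain_no_gfd = 15, 12   # pain-free patients on / off a GFD

odds_pain_on_gfd = pain_gfd / nopain_gfd          # odds of pain on a GFD
odds_pain_off_gfd = pain_no_gfd / nopain_no_gfd   # odds of pain off a GFD
crude_or = odds_pain_on_gfd / odds_pain_off_gfd

print(round(crude_or, 2))  # 0.22 (study's adjusted OR: ~0.113)
```

Even the crude estimate points in the same direction: strict gluten-free adherence is associated with substantially lower odds of neuropathic pain.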
In treating peripheral neuropathy, it is important to recognize that pain may occur with any type of neuropathy and should be treated. Neuropathy is so diverse that a single patient may have more than one type of neuropathy. In addition to an accurate diagnosis, risk factors for neuropathy should be addressed and managed, if possible, in order to stop disease progression. Referral to neurologists or pain specialists may be appropriate. Analgesic regimens for peripheral neuropathy are similar to those used for other types of neuropathic pain: multimodal analgesic therapy involving anticonvulsants, antidepressants, opioids, and NSAIDs. Nonpharmacological treatments may be helpful for some patients as well and may be combined with pharmacologic regimens [47].
Management of chronic neuropathic pain
Neuropathic pain was first defined by the International Association for the Study of Pain (IASP) as pain initiated or caused by a primary lesion or dysfunction in the nervous system [2]. This definition was revised in 2008 by the IASP Special Interest Group on Neuropathic Pain and accepted by the IASP in 2011 as "pain caused by a lesion or disease of the somatosensory nervous system" [54,55]. This revised definition eliminated the problematic word "dysfunction," which would have classified fibromyalgia as neuropathy [54]. The IASP has since adopted the term "nociplastic" to better describe fibromyalgia and other painful conditions [56]. However, an optimal definition of neuropathic pain remains elusive, as lesions of the peripheral or central nervous system may occur in patients with a concurrent neurological dysfunction and pain occurs only in a subset of these patients [54]. In other words, neural lesions do not mean the patient has neuropathic pain [54]. Thus, the presence of a neurological lesion or neurological disease does not guarantee the presence of neuropathic pain, which seems to be more associated with induced changes in the peripheral and central nervous systems, such as alterations of pain modulation systems, central sensitization, and others. The lack of clear definitions has impeded progress in better evaluating, grading, and assessing neuropathic pain [57].
Neuropathic pain can be readily classified as peripheral or central. Peripheral neuropathic pain includes postamputation pain (sometimes called "phantom limb pain"), trigeminal neuralgia, painful radiculopathy, painful polyneuropathy, postherpetic neuralgia (including the ophthalmic form [58]), peripheral neuropathy, and peripheral nerve injury pain. Central neuropathic pain includes post-stroke pain, neuropathic pain associated with spinal cord injury, and central pain syndromes involved in MS [57,59,60].
Despite ongoing efforts to better understand the etiology, diagnosis, and treatment of neuropathic pain, many patients do not receive adequate analgesia for their neuropathic pain. The prevalence of neuropathic pain in the general population has been estimated at 7-8%; however, this number must be considered cautiously, as we lack validated diagnostic criteria for use in surveys of the general population [58]. Two large surveys of the general population in the United Kingdom and France estimated neuropathic pain prevalence at 8.2% and 6.9%, respectively [61,62]. The prevalence is substantially higher in certain subpopulations, such as patients with diabetes or cancer.
Most of the epidemiologic data about neuropathic pain come from defined patient cohorts in specific studies. A systematic review of epidemiological studies of neuropathic pain found a prevalence of chronic pain with a neuropathic component to be 3-17% and neuropathic pain associated with a specific disease or condition to vary depending on the condition, estimating the overall population prevalence of neuropathic pain to be between 7% and 10% [63]. The burden of neuropathic pain, including healthcare costs plus lost productivity, is impossible to quantify but is substantial [64].
Numerous guidelines are available for neuropathic pain but are not consistently translated into clinical practice [65]. Guidelines for neuropathic pain care are issued by the IASP [66-68], the European Federation of Neurological Societies [69-71], the National Institute for Health and Care Excellence (NICE), and the Canadian Pain Society [72,73], among others.
Neuropathic pain has been associated with dysfunction, disability, anxiety, depression, sleep disturbances, and reduced quality of life [50,74,75]. The pain of the condition may be substantially influenced by emotional, behavioral, and psychosocial factors. In general, neuropathic pain is associated with overall poor health [76]. Optimal management of neuropathic pain is a clinical necessity, but it goes far beyond just "pain control" [77].
The optimal management of neuropathic pain begins with an accurate diagnosis because there are many different types and manifestations of neuropathic pain. Using the three-L approach to diagnosis ("listen, look, and locate"), the pain site(s) should be identified and pain characteristics described. Pain characteristics may include qualities such as deep, burning, throbbing, electrical, or "pins and needles" pain, as well as whether the pain is intermittent or continuous, whether it waxes and wanes, and whether it migrates around the body. In addition to the detailed patient history, a physical examination should also be performed. Clinicians must consider the patient's medical history, underlying conditions, comorbidities, current pharmacological regimen, and what the patient may have discovered to relieve or exacerbate the neuropathic pain [78]. Early diagnosis improves the likelihood of good outcomes. In treating the neuropathic pain patient, a holistic and patient-centric approach should be used [79]. A key objective in treatment must be pain relief, which may improve physical function, sleep quality, mood, and sense of wellbeing. In turn, these improvements can boost the quality of life and allow the patient to exercise or move more, which may further improve the physical function [80].
There is general agreement among the numerous guidelines for pharmacologic treatment of neuropathic pain, all of which advise a stepwise approach from the first-line to other treatments [79]. The first-line approach includes tricyclic antidepressants (TCAs) and gabapentinoids [81]. For localized neuropathic pain, some suggest lidocaine patch as the first-line treatment, but it should be noted that anticonvulsants and antidepressants are supported by evidence-based guidelines, whereas the lidocaine patches are supported by data from randomized controlled trials [82]. Pharmacological therapy that affects peripheral sensitization would include capsaicin, local anesthetics, and TCAs. Pharmacological therapy for pain associated with central sensitization would include α2δ ligands (gabapentinoids) [81], TCAs, opioids, and tramadol. In some cases, it is effective to target descending pain modulation using SNRIs, TCAs, opioids, or tramadol [79,81,83,84]. More recently, botulinum toxin has been used to treat certain cases of neuropathic pain; it acts by inhibiting pro-inflammatory mediators and peripheral neurotransmitters from the sensory nerves [85,86]. A treatment algorithm has been published that shows the stepwise progression of various approaches; combination therapy is often appropriate for treating neuropathic pain [80] (Table 1).
Summary of a treatment algorithm for neuropathic pain showing the stepwise progression of treatments and considerations
Source: [79].
The optimal treatment plan should be customized for each patient, considering potential adverse effects, special populations, contraindications, tolerability, drug-drug interactions, and pharmacokinetic profiles. For instance, TCAs may not be appropriate for geriatric patients, particularly at high doses. Opioids should be avoided or used only with careful clinical supervision in patients at risk for substance use disorder [79,80].
Prescribing recommendations for first- and second-line treatments for neuropathic pain are indicated in Table 2. The old clinical adage for titration, "start low and go slow," applies to these agents.
The prescribing choices for the first-line therapy must be carefully considered and customized for each patient [79]. For example, gabapentinoids are recommended as a possible first-line drug class; there are two main drugs with the same mechanism of action to consider: gabapentin and pregabalin. While both are similarly efficacious, pregabalin has a better pharmacokinetic profile, but gabapentin is less expensive. Serotonin-norepinephrine reuptake inhibitors (SNRIs) as well as TCAs are also first-line treatments. In general, TCAs are more effective than SNRIs, but SNRIs have a better safety profile. Duloxetine may be a good choice for geriatric patients, as its number needed to harm (NNH) is 17.5 [87]. However, care should be taken in assessing any drug for neuropathy strictly by its NNH or number needed to treat (NNT) due to heterogeneity among studies [88].
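The NNH is simply the reciprocal of the absolute risk increase for an adverse event (just as the NNT is the reciprocal of the absolute risk reduction for benefit). The duloxetine figure of 17.5 comes from the cited source; the event rates below are purely illustrative, chosen only to show the arithmetic:

```python
# NNH = 1 / (adverse-event rate on drug - adverse-event rate on placebo).
# Hypothetical rates chosen to reproduce an NNH of about 17.5.
rate_drug = 0.157     # adverse-event rate on active treatment (illustrative)
rate_placebo = 0.100  # adverse-event rate on placebo (illustrative)

nnh = 1 / (rate_drug - rate_placebo)
print(round(nnh, 1))  # 17.5
```

In other words, an NNH of 17.5 implies that roughly one additional patient experiences the adverse event for every 17 to 18 patients treated, relative to placebo.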
Combination treatments, in which two or more agents with complementary mechanisms of action are used, are familiar analgesic regimens [89,90]. The advantages of combination therapy are several: compared to monotherapy, combination treatment may offer superior analgesia, better tolerability, and improvement in common comorbid symptoms, such as anxiety, depression, and disordered sleep [91]. However, the role of combination therapy for treating chronic neuropathic pain is not particularly well studied. For example, it is not clear which, if any, combination(s) of antiepileptics, antidepressants, opioids, topicals, and other agents provide benefits. A systematic review of 21 studies found only limited evidence in support of two-drug combination treatments for neuropathy, mainly because the small number of studies evaluated several different potential combinations [92]. A double-blind study of neuropathy patients treated with either oral nortriptyline, oral morphine, or both in combination found that combination therapy was more efficacious than either agent alone [93]. While combination therapy has numerous advantages, it can also contribute to polypharmacy. Many patients with chronic neuropathy are already taking medications for comorbidities, so combination therapy must be viewed cautiously [94]. In this connection, it is important to avoid prescribing two or more drugs with similar adverse events, such as central nervous system depression [92].
The first step in managing chronic neuropathic pain is to initiate the treatment with one or more of the first-line agents (gabapentinoids, SNRIs, and TCAs) and monitor the patient's response. If the patient has adequate pain relief and tolerates the medication, then that regimen may continue and the clinical team should monitor the patient to be sure effects are durable. If pain relief is partial, then another first-line medication should be added to see if there is any further improvement. Should the patient not get any pain relief, then the treatment should be discontinued and another first-line treatment used. There can be cases where all first-line options, either as monotherapy or in combination, fail to provide adequate analgesia. In such cases, treatment should advance to second-line approaches (tramadol) and, if those are not effective, then to third-line treatments. In some cases, referral to a pain specialist or neurologist may be appropriate [79,80].
Certain agents have produced equivocal results with respect to control of neuropathic pain. Tapentadol combines µ-opioid receptor agonism with norepinephrine reuptake inhibition, a mechanism that engages the descending modulation of pain signals. It is not particularly effective in peripheral neuropathy, although overall it is better tolerated and has fewer side effects than oxycodone [79,80]. In general, oxycodone has shown a degree of effectiveness in painful diabetic neuropathy or postherpetic neuralgia, but not in other painful neuropathic syndromes [95]. On the other hand, tapentadol is effective in treating low back pain with a neuropathic component [96]. Anticonvulsant agents other than α2δ ligands, such as oxcarbazepine, have produced only inconsistent results with respect to neuropathic pain [97]. In recent research, certain oral-mucosal cannabis treatments have been effective in treating neuropathic pain, but evidence to date comes from smaller, short-term studies and may not be generalizable to all neuropathic pain patients [98]. Equivocal or mixed results have come from studies of selective serotonin reuptake inhibitors (SSRIs), mexiletine, topical clonidine, and N-methyl-D-aspartate (NMDA) antagonists [79,80].
When evaluating a neuropathic pain patient, it is important to manage the expectations of the patient as well as to be realistic about treatment effectiveness. Complete pain relief is likely an unrealistic goal, and patients should be informed about this fact. A worthwhile clinical goal is >50% pain reduction, but this may not be achievable for all patients. Furthermore, pain regimens should help improve sleep, well-being, function, mood, and quality of life for the patient and must be seen as multidimensional. Pain reduction of 30-50% can be achieved in most neuropathy patients with evidence-guided pharmacologic therapy at maximal doses [99]. Patients should also be advised to consider functional treatment goals in addition to pain relief alone.
Neuropathic pain secondary to COVID
The symptoms of COVID are many and diverse. Certain prominent symptoms, such as fatigue, anosmia, dysgeusia, headache, vertigo, and myalgia, suggest a direct invasion of the nervous system [100]. Long-haul or long COVID is a recently reported postviral syndrome associated with a constellation of symptoms not necessarily the same as the symptoms the patient experienced with acute COVID [101]. In addition to neuropathic symptoms that may occur during acute or postviral COVID, the treatments to which patients are subjected may further contribute to certain neuropathic pain syndromes. The circulating SARS-CoV-2 virus may increase pro-inflammatory cytokine production (sometimes resulting in "cytokine storm") and directly invade the olfactory epithelium. Anosmia and ageusia occur with other viral infections but appear to be particularly prevalent among those with acute COVID infections [102]. COVID infection has resulted in an increased rate of neuropathic pain, which in itself is a predictor of neurological complications [100,103].
The pathophysiology of neuropathic pain is described in the literature. Neural lesions can trigger a massive influx of neurotransmitters at the spinal level, leading to intracellular molecular changes and upregulation of certain receptors, such as NMDA, neurokinin-1, and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) glutamate receptors, along a pathway that results in central sensitization. During a cytokine storm, the release of the pro-inflammatory cytokines interleukin-1 (IL-1) and IL-6 increases, which, in turn, increases the production of nerve growth factor (NGF). The release of NGF increases localized effects at the Na channels and induces cyclo-oxygenase-2 (COX-2) and prostaglandin production. This promotes depolarization in the form of both antegrade and retrograde axonal transport, resulting in an increase in neuropeptides. The production of neuropeptides can lead to peripheral sensitization. The liberation of tumor necrosis factor (TNF) can also result in anterograde and retrograde axonal transport as well as an increase in the expression of bradykinin receptors and the release of certain neuropeptides, likewise triggering peripheral sensitization. Furthermore, both central and peripheral sensitization can form a feedback loop, intensifying each other [104,105]. In COVID patients, the neuropathic pain mechanisms relate to the host's response to the virus. Acute COVID infection increases IL-1, IL-6, and TNF-α, all of which stimulate nociceptors [103], and elevated levels of these particular cytokines are suspected to be associated with the development of neurological symptoms [106].
As with many viral infections, the tissue tropism of COVID requires accessible viral receptors and entry cofactors on the host cells. Neuropilin-1 (NRP1), a transmembrane receptor, appears to enhance the infectivity of the SARS-CoV-2 virus [102]. Thus, NRP1 can be viewed as a host factor for COVID and a potential therapeutic target [107]. There is some preclinical evidence that NRP1 may facilitate the entry of the virus into the brain via the olfactory epithelium [108]. Endothelial dysregulation appears to play a role in severe COVID infection and has been associated with vasoconstriction, vascular leaks, thrombosis, excessive inflammation, and disruption of the body's natural antiviral immune defenses [109]. It has been speculated that SARS-CoV-2 may bind to the angiotensin-converting enzyme-2 (ACE-2) receptors via the spike protein and, in this way, infect endothelial cells. The suspected downregulation of ACE-2 by the virus may lead to the pulmonary, circulatory, and other complications seen in severe COVID infection [110]. Comorbidities that seem to exacerbate COVID symptoms, such as obesity and hypertension, involve underlying endothelial damage and dysfunction [111]. Endothelial dysfunction is a systemic condition in which the endothelium no longer promotes vasodilation, fibrinolysis, and anti-aggregation. The healthy endothelium prevents blood clotting by providing an antithrombotic surface, which is disrupted by endothelial inflammation. It appears that COVID infection can cause endothelial dysfunction and a hypercoagulable state [112]. Endotheliitis, with its hyperproduction of pro-inflammatory cytokines, creates a hyperinflammatory state, which can cause the blood-brain barrier to rupture, leading to a cascade of pro-inflammatory mediators and the intrusion of innate immune cells into the brain [113].
Acute COVID infection has been associated with numerous neurological complications, including viral encephalitis, encephalopathy, acute cerebrovascular disease, ischemic stroke, polyneuropathy, epileptic seizures, Guillain-Barré syndrome, and others [114].
Patients who enter intensive care units (ICU) for any reason may be subject to the post-ICU syndrome, a spectrum condition that may involve persistent cognitive deficits, weakness, intrusive memories, and pain [115]. Intensive care in and of itself can be associated with systemic neuropathy; prolonged time spent in the prone position has been associated with neurapraxia and severe axonal damage to the ulnar nerve, brachial plexus, and the nerves in the forearms [116]. It has been estimated that the prevalence of chronic pain one year after ICU discharge ranges from 14% to 77%. Despite this alarming statistic, there has been relatively little study of this population, which expanded greatly during the COVID pandemic [117,118]. Post-ICU syndrome involves chronic painful conditions, such as joint pain, muscle pain related to atrophy, polyneuropathy, and pain associated with the critical illness itself [118].
Much more research is needed to better understand the neurological ramifications of COVID. The healthcare system must anticipate that many COVID survivors will develop de novo neuropathic pain symptoms in the weeks or months following acute infection. COVID survivors who had pre-existing neuropathic painful conditions may experience a deterioration of their condition and exacerbation of their neuropathic pain. The presence of neuropathic pain in a COVID survivor is an indicator of potential neurological damage. Overall, the healthcare system will likely see an increase in neuropathic pain in the coming years [100]. The duration of neurological manifestations of COVID remains unknown.
Conclusions
Neuropathic pain remains both prevalent and challenging to treat. It often occurs in patients with neurological disorders, such as MS, diabetic peripheral neuropathy, postherpetic neuralgia, spinal cord injury, stroke, or other conditions. Prompt and accurate diagnosis is important for good outcomes, and multimodal pharmacologic regimens are effective. Acute COVID infection as well as the postviral syndrome ("long COVID") may have neuropathic symptoms that suggest neurological damage.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Combined action observation and imagery facilitates corticospinal excitability
Observation and imagery of movement both activate similar brain regions to those involved in movement execution. As such, both are recommended as techniques for aiding the recovery of motor function following stroke. Traditionally, action observation and movement imagery (MI) have been considered as independent intervention techniques. Researchers have however begun to consider the possibility of combining the two techniques into a single intervention strategy. This study investigated the effect of combined action observation and MI on corticospinal excitability, in comparison to either observation or imagery alone. Single-pulse transcranial magnetic stimulation (TMS) was delivered to the hand representation of the left motor cortex during combined action observation and MI, passive observation (PO), or MI of right index finger abduction-adduction movements or control conditions. Motor evoked potentials (MEPs) were recorded from the first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles of the right hand. The combined action observation and MI condition produced MEPs of larger amplitude than were obtained during PO and control conditions. This effect was only present in the FDI muscle, indicating the facilitation of corticospinal excitability during the combined condition was specific to the muscles involved in the observed/imagined task. These findings have implications for stroke rehabilitation, where combined action observation and MI interventions may prove to be more effective than observation or imagery alone.
INTRODUCTION
Research using neuroimaging techniques (e.g., Grèzes and Decety, 2001;Filimon et al., 2007;Munzert et al., 2008) has indicated that several cortical areas shown to be active during movement execution are also active during the action observation and imagery of movement. These areas include the dorsal pre-motor cortex, primary motor cortex, supplementary motor area, superior parietal lobe, intraparietal sulcus, and cerebellum. Therefore, when physical movement is not possible, as in the case of stroke or other brain injury, action observation and imagery may provide useful techniques for maintaining activity in motor regions of the brain, and so assist in the recovery of motor functioning (Sharma et al., 2006;de Vries and Mulder, 2007;Ertelt et al., 2007;Holmes and Ewan, 2007;Mulder, 2007). As such, considerable research attention has been devoted to understanding the effects of action observation and imagery on the human motor system and establishing techniques for best utilizing action observation and imagery in rehabilitation settings.
One method that has been used to investigate the effects of action observation and imagery independently on the human motor system is transcranial magnetic stimulation (TMS). When TMS is applied to the primary motor cortex, motor evoked potentials (MEPs) are produced in the corresponding muscles; the amplitude of which provides a marker of corticospinal excitability at the time of stimulation (Rothwell, 1997; Petersen et al., 2003; Naish et al., 2014). Research into action observation indicates that single-pulse TMS delivered to participants' motor cortex during observation of human movements produces MEPs of larger amplitude than those obtained under control conditions (e.g., Fadiga et al., 1995; Strafella and Paus, 2000; Patuzzo et al., 2003; Borroni et al., 2005; Aglioti et al., 2008; Loporto et al., 2012). This indicates that passive observation (PO) of hand and arm movements can facilitate corticospinal excitability. A similar effect also occurs during imagery of human movements, where the amplitudes of MEPs obtained during imagery are larger than those obtained under control conditions (e.g., Kasai et al., 1997; Fadiga et al., 1999; Hashimoto and Rothwell, 1999; Rossini et al., 1999; Facchini et al., 2002). Stinear et al. (2006), however, have reported that this effect is only present when participants engage in kinesthetic imagery, but not visual imagery.
As both action observation and imagery have been shown to facilitate corticospinal excitability, albeit through partially different neural mechanisms, several researchers have compared the facilitation effects of action observation and imagery in an attempt to establish which may be the more effective technique. For example, Clark et al. (2004) used TMS to stimulate the motor cortex representation for the right hand muscles during observation, imagery, and physical imitation of simple hand movements. In comparison to a resting control condition, both action observation and imagery produced a corticospinal facilitation effect, but there was no difference in the extent of the facilitation between the two experimental conditions. This effect has since been replicated consistently in the literature (e.g., Léonard and Tremblay, 2007;Roosink and Zijdewind, 2010;Williams et al., 2012), indicating that action observation and imagery facilitate corticospinal excitability to a similar extent.
Action observation and imagery have, therefore, traditionally been viewed as separate intervention techniques. Researchers have either studied the effects of action observation or imagery in isolation, or compared the effects of the two techniques against each other. More recently, it has been proposed that action observation and imagery should be viewed as complementary, rather than competing, interventions (Holmes and Calmels, 2008). Indeed, Vogt et al. (2013) have suggested that it is possible for humans to observe a movement whilst concurrently imagining that they are performing that same movement; a process they term "congruent action observation-motor imagery". Given that both action observation and imagery activate the motor system when performed in isolation, it is logical to assume that combining the two techniques may activate the motor system to a greater extent. Recent fMRI and EEG research would support this assertion (e.g., Macuga and Frey, 2012;Nedelko et al., 2012;Berends et al., 2013;Villiger et al., 2013, for a review see Vogt et al., 2013). Collectively, this body of research has revealed that, compared to PO, concurrent action observation and imagery of a variety of congruent movement tasks produces stronger activation in several movement-related brain regions.
Single-pulse TMS has also been used to explore the effects of combined action observation and imagery on corticospinal excitability. For example, Sakamoto et al. (2009) stimulated the left motor cortex representation for the biceps brachii muscle whilst participants: (i) observed passively a bicep curl action; (ii) imagined performing a bicep curl action; or (iii) observed a bicep curl action whilst simultaneously imagining that they were performing that same action. The amplitude of MEP responses in these three conditions were compared to those obtained from a control condition, involving passive observation of a fixation cross. Both imagery alone and the combined action observation and imagery conditions produced larger amplitude MEPs than the control condition, in contrast to the PO condition. Importantly, the authors also reported that the combined action observation and imagery condition produced larger amplitude MEPs than either action observation or imagery conditions alone. Similar findings have also been reported by Ohno et al. (2011) and Tsukazaki et al. (2012) for combined observation and imagery of chopstick use and three-ball juggling in novices, respectively. Based on these findings, the authors suggested that combining action observation and imagery into a single intervention strategy may be more effective for aiding recovery of motor function in patients than either action observation or imagery alone. This argument is supported by the recent behavioral evidence provided by Eaves et al. (2014), which indicates that engaging in combined observation and imagery can facilitate subsequent motor execution.
Although all three combined action observation and imagery experiments that have been published to date using TMS have demonstrated that combined action observation and imagery produces larger amplitude MEPs than either action observation or imagery alone (e.g., Sakamoto et al., 2009;Ohno et al., 2011;Tsukazaki et al., 2012), the experiments were limited by a number of methodological factors. First, these experiments all used observation of a fixation cross or a blank screen as the control condition against which to compare MEP amplitudes obtained in the action observation and imagery conditions. Use of such a control condition is problematic in that it makes the interpretation of the corticospinal facilitation effect difficult (Loporto et al., 2011). Loporto et al. (2011) argued that by using a fixation cross or blank screen as the only control condition in TMS action observation and imagery experiments, researchers are unable to attribute accurately any facilitation effect to the specific observation and/or imagery task. For example, any facilitation effect found for action observation in comparison to a fixation cross or blank screen control, may be due to the presence of movement in the experimental condition rather than the specific observation of task-related human movement. Equally, facilitation effects obtained during imagery, in comparison to a fixation cross or blank screen control, may be due to participants engaging in any form of cognitive activity, rather than specific imagery of human movement. Taken together, it is important to conduct similar experiments for combined action observation and imagery whilst employing more rigorous control conditions, in order to ascribe accurately this effect to the experimental manipulation.
Further, in the reported combined action observation and imagery TMS studies (i.e., Sakamoto et al., 2009; Ohno et al., 2011; Tsukazaki et al., 2012) the ordering of trials was randomized by experimental condition across the experiment. Although such a randomization procedure is common in typical TMS action observation and imagery research, we argue that to do so in a combined action observation and imagery experiment is problematic. The video stimulus provided to participants is, typically, identical in the PO and combined action observation and imagery conditions. The only difference between the two conditions is the instructional content that accompanies the video (i.e., "Observe the video" or "Imagine yourself performing the action as you observe it"). By randomizing the trials for each condition throughout the experiment, researchers are unable to ensure that the effects of the instructions given for one condition do not influence participants' behavior on other conditions. Specifically, once participants have been told to imagine themselves performing the action as they observe it, it is difficult to be certain that they are not engaging in the more covert behavior when taking part in subsequent PO trials. The instructional content that accompanies action observation videos has been shown to modulate corticospinal excitability (Roosink and Zijdewind, 2010) and, as such, this may have confounded the results of these three studies (Naish et al., 2014). Presenting the trials as blocks, in a set order so that the combined action observation and imagery trials occur after PO trials, can control for this issue. It is common in TMS action observation and imagery research to record MEPs from a control muscle not involved in the execution of the observed/imagined action.
The inclusion of a control muscle provides greater efficacy for facilitation effects being specific to the muscles involved in the execution of the observed/imagined action (e.g., Fadiga et al., 1995, 1999). None of the three combined action observation and imagery experiments published to date that have used TMS have included a control muscle against which to compare facilitation effects for the primary muscle of interest. As such, it is currently unknown whether such a muscle-specific facilitation effect would occur in a combined action observation and imagery condition. The aims of this study were, therefore, to: (i) determine whether combined action observation and imagery of human movement would facilitate corticospinal excitability to a greater extent than either PO or imagery alone; and (ii) establish whether any corticospinal facilitation effect obtained during combined action observation and imagery of human movement was specific to those muscles involved in the performance of the observed/imagined movement. It was hypothesized that: (i) PO alone, imagery alone and combined action observation and imagery would all produce a corticospinal facilitation effect; (ii) combined action observation and imagery would produce a greater corticospinal facilitation effect than either PO alone or imagery alone; and (iii) such corticospinal facilitation effects would only be present in the muscles involved in the observed and/or imagined action.
PARTICIPANTS
Nineteen healthy volunteers (nine females) aged 18-45 years (mean age 26.8 years) participated in the experiment. All participants gave their written informed consent to take part and were naïve to the purpose of the experiment. The TMS Adult Safety Screen (Keel et al., 2001) was used to identify any participants who may have been predisposed to possible adverse effects of the stimulation. No participants were excluded from the study based on their questionnaire responses and no discomfort or adverse effects to the stimulation were reported. All participants were right-handed as assessed by the Edinburgh Handedness Inventory (Oldfield, 1971). The protocol for the experiment was approved by the local university ethics committee and the experiment was conducted in accordance with the Declaration of Helsinki (2013).
QUESTIONNAIRE MEASURE
Prior to participating in the experiment, participants completed the Vividness of Movement Imagery Questionnaire-2 (VMIQ-2; Roberts et al., 2008) to provide a marker of their imagery vividness. This 36-item questionnaire requires participants to imagine themselves performing different movements from internal, external, and kinesthetic perspectives. Participants rate the clarity of the images that they generate on a five-point Likert scale, with responses ranging from 1 (perfectly clear and vivid image) to 5 (no image at all). Lower scores on the VMIQ-2 therefore indicate that participants can generate clear and vivid images. Roberts et al. (2008) reported all three scales to be reliable, observing alpha coefficients of 0.95, 0.95 and 0.93 for the external, internal and kinesthetic scales, respectively.
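The VMIQ-2 scoring logic described above can be sketched in a few lines. This is a hypothetical illustration only: it assumes 12 items per subscale, summed into a subscale total (range 12-60), and the dictionary layout and function name are ours, not part of the published questionnaire.

```python
def score_vmiq2(responses):
    """Sum the 12 ratings per subscale; ratings run from 1 (perfectly
    clear and vivid image) to 5 (no image at all), so lower totals
    indicate more vivid imagery."""
    for scale, items in responses.items():
        if len(items) != 12 or any(not 1 <= r <= 5 for r in items):
            raise ValueError(f"subscale '{scale}' needs 12 ratings on a 1-5 scale")
    return {scale: sum(items) for scale, items in responses.items()}

# Illustrative participant: every item rated 2 on the visual scales, 3 kinesthetically.
example = {
    "external": [2] * 12,
    "internal": [2] * 12,
    "kinesthetic": [3] * 12,
}
print(score_vmiq2(example))  # {'external': 24, 'internal': 24, 'kinesthetic': 36}
```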
ELECTROMYOGRAPHIC RECORDINGS
Electromyographic (EMG) recordings were collected from the first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles of the right hand using bipolar, single differential, surface EMG electrodes (DE-2.1, Delsys Inc, Boston, MA). The electrodes comprised two 10 mm × 1 mm silver bar strips, spaced 10 mm apart. The EMG was recorded with a sampling rate of 2 kHz, bandwidth 20 Hz to 450 Hz, 92 dB common mode rejection ratio, and >10¹⁵ Ω input impedance. All electrode sites were cleaned with alcohol swabs prior to electrode attachment. The electrodes were placed over the mid-point of the belly of the muscles and a reference electrode was placed over the ulnar process of the right wrist. The EMG signal was recorded using Spike 2 version 6 software (Cambridge Electronic Design (CED), Cambridge), received by a Micro 1401+ analog-digital converter (CED).
TRANSCRANIAL MAGNETIC STIMULATION
TMS was performed with a figure-of-eight coil (mean diameter of 70 mm) connected to a Magstim 200² magnetic stimulator (Magstim Co., Whitland, Dyfed, UK) which delivered monophasic pulses with a maximum field strength of 2.2 Tesla. The coil was held in a fixed position, using a mechanical arm, over the left motor cortex. The coil was orientated so that the flow of induced current in the brain traveled in a posterior-anterior direction, perpendicular to the central sulcus; the optimal orientation for achieving indirect trans-synaptic activation (Brasil-Neto et al., 1992). The optimal scalp position (OSP) was identified as the scalp site which produced MEPs of the largest amplitude from the right FDI muscle, whilst also eliciting consistent MEPs from the ADM muscle, using a stimulation intensity of 60% maximum stimulator output. The process of stimulating the OSP for the primary muscle of interest and recording MEPs from more than one muscle is common in TMS action observation and imagery research (Naish et al., 2014). The use of 60% maximum stimulator output as the intensity for locating the OSP is also common in research of this nature (e.g., Clark et al., 2004; Loporto et al., 2012; Williams et al., 2012) and is appropriate as it produces large, short-latency MEPs in most individuals. Participants wore a tightly-fitting polyester cap on their head on which the OSP was marked to ensure a constant coil positioning throughout the experiment. The stimulation intensity was then reduced or increased until the resting motor threshold (RMT) was determined. RMT was determined using the MEP amplitudes obtained from the FDI muscle and was defined as the minimum stimulation intensity that elicited peak-to-peak MEP amplitudes greater than 50 µV in at least 5 out of 10 trials (Rossini et al., 1994). As Loporto et al. (2013) demonstrated that facilitation of corticospinal excitability during action observation was only evident following low-intensity TMS, the experiment was conducted at a stimulation intensity of 110% RMT, thereby reducing the chance of direct wave stimulation more frequently seen at higher stimulation intensities.
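The RMT criterion above (the minimum intensity eliciting peak-to-peak MEPs greater than 50 µV in at least 5 of 10 trials; Rossini et al., 1994) can be sketched as a simple check over amplitude data. The function names, example intensities, and amplitude values here are purely illustrative, not the study's data.

```python
def meets_rmt_criterion(mep_amplitudes_uv, threshold_uv=50.0, required=5):
    """True if at least `required` of the trials exceed the 50 µV
    peak-to-peak threshold."""
    return sum(amp > threshold_uv for amp in mep_amplitudes_uv) >= required

def estimate_rmt(trials_by_intensity):
    """Lowest stimulator intensity (% maximum output) whose trials satisfy
    the criterion; `trials_by_intensity` maps intensity -> 10 amplitudes."""
    passing = [i for i, amps in trials_by_intensity.items()
               if meets_rmt_criterion(amps)]
    return min(passing) if passing else None

# Hypothetical peak-to-peak amplitudes (µV) from 10 trials at each intensity.
data = {
    38: [30, 42, 55, 20, 48, 61, 33, 40, 25, 52],   # only 3/10 above 50 µV
    40: [62, 55, 48, 70, 53, 58, 35, 66, 51, 44],   # 7/10 above 50 µV
    42: [80, 75, 66, 90, 72, 68, 77, 85, 60, 74],   # all above 50 µV
}
print(estimate_rmt(data))               # 40
print(int(1.10 * estimate_rmt(data)))   # 44 -> experiment run at 110% RMT
```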
EXPERIMENTAL PROCEDURES
Participants were seated in a dimly illuminated room in a comfortable chair with their elbows flexed at 90° and their hands placed in a relaxed position on a table in front of them. The participants' head rested on a chin and head rest to restrict movement. A 37 inch Panasonic LCD television screen (resolution, 1024 × 768 pixels; refresh frequency, 60 Hz) was positioned at a distance of 40 inches from the participant. Participants were requested to refrain from any voluntary movement and to attend to the stimuli presented on the television screen. Blackout curtains ran along either side of the table and behind the screen to eliminate any distractive visual stimuli in the room. Participants took part in six different conditions (three experimental and three control conditions). The three experimental conditions were termed PO, Movement Imagery (MI), and Combined Action Observation and Movement Imagery (AO+MI). The PO condition showed the dorsal view of a hand in prone position performing six abductions of the index finger at a frequency of 1.33 Hz and participants were instructed to watch the videos. In the MI condition, participants were presented with a blank screen and were instructed to imagine that they were performing index finger abduction movements in time with an auditory metronome at a frequency of 1.33 Hz. In this condition participants were instructed to focus specifically on kinesthetic imagery (i.e., imagining the physiological sensations associated with executing the index finger abduction movement), as this type of imagery has been shown to modulate corticospinal excitability to a greater extent than visual imagery alone (Stinear et al., 2006). In the AO+MI condition, participants observed identical videos to those used in the PO condition, but were instructed to imagine that they were performing the movement as they observed it. As in the MI condition, participants were again instructed to use kinesthetic imagery.
In the PO and AO+MI conditions, participants observed the movement being performed by both male and female hands, irrespective of their own sex. The three control conditions were termed Static Hand (SH), Movement Observation (MO), and Backwards Counting (BC). In the SH condition participants were shown the dorsal view of a hand resting in a prone position and instructed to watch the video. In the MO condition participants were instructed to watch a video of pendulum swinging at 1.33 Hz, mimicking the motion of the index finger in the PO and AO+MI conditions. In the BC condition participants observed a blank screen (as in the MI condition), but were instructed to complete a task of counting backwards mentally from a random number, in time with an auditory metronome at 1.33 Hz. All videos were of nine-second duration.
EXPERIMENTAL PROTOCOL
Participants observed six blocks of trials, with each block containing sixteen videos of the same condition (see Figure 1). The blocks were presented in a semi-random order, where the SH block was always presented before the PO block, the PO block was always presented before the MI block, and the AO+MI block was always presented after both the PO and MI blocks. The purpose of this was to prevent participants from engaging in combined imagery and observation during PO trials or engaging in imagery during SH trials, that could have resulted from having been previously exposed to these experimental conditions. Prior to each block of trials, TMS was delivered during eight pre-block control videos of a blank screen with a fixation cross in order to control for any coil movement between blocks. A single TMS pulse was applied during each video over the OSP at either 3500 or 8000 ms after video onsets. These timings corresponded to the point of maximal abduction in the PO and AO+MI videos. The variation in the onset of the TMS pulse was to remove the predictability of the stimulus. Two-minute rest periods were provided between blocks.
DATA ANALYSIS
A pre-stimulus recording of 200 ms was used to check for the presence of EMG activity before the TMS pulse was delivered. Individual trials in which the peak-to-peak amplitude of the baseline EMG activity was 2.5 SD higher than the mean baseline EMG activity of each participant were discarded from further analysis (e.g., Loporto et al., 2012, 2013) since it may have influenced the amplitude of the subsequent MEP. This resulted in 3.4% of trials being discarded from the FDI muscle and 2% of trials being discarded from the ADM muscle. Due to the nature of the study trials could not be fully randomized across blocks, since the AO+MI videos needed to be presented after the PO videos to prevent participants from engaging in combined imagery and observation during the PO trials. Therefore a 2 (muscle) × 6 (block) repeated measures ANOVA was performed to ensure that there was no change in pre-block (fixation cross) data throughout the experiment to account for any possible coil movement across the conditions that may have affected the MEP results.
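The baseline rejection rule above (discard any trial whose pre-stimulus peak-to-peak EMG exceeds the participant's mean baseline by more than 2.5 SD) can be sketched as follows. The function name and the amplitude values are hypothetical, used only to illustrate the criterion.

```python
import statistics

def reject_noisy_trials(baseline_p2p, sd_multiplier=2.5):
    """Return indices of trials to keep: a trial is discarded when its
    pre-stimulus (200 ms) peak-to-peak EMG exceeds the participant's
    mean baseline by more than `sd_multiplier` standard deviations."""
    mean = statistics.mean(baseline_p2p)
    cutoff = mean + sd_multiplier * statistics.stdev(baseline_p2p)
    return [i for i, amp in enumerate(baseline_p2p) if amp <= cutoff]

# Hypothetical baseline amplitudes (µV): fifteen quiet trials and one
# trial contaminated by voluntary muscle activity.
baseline = [5.0] * 15 + [30.0]
kept = reject_noisy_trials(baseline)
print(len(baseline) - len(kept))  # 1 trial discarded (the 30 µV trial)
```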
The peak-to-peak MEP amplitude was measured from each individual trial and the mean MEP amplitude was calculated for each condition. Due to the large inter-participant variability in absolute MEP amplitudes, these data were normalized using the z-score transformation (e.g., Fadiga et al., 1995; Loporto et al., 2012). The normalized MEP amplitudes recorded from both muscles were analyzed using a repeated measures ANOVA, with main factors of muscle (FDI, ADM) and video (SH, PO, MI, AO+MI, BC, MO). Post hoc analyses with the Sidak adjustment were applied where necessary. The level of statistical significance for all analyses was set to α = 0.05. Effect sizes are reported as partial eta squared (ηp²).
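The z-score transformation used to normalize each participant's MEP amplitudes can be sketched in a few lines. The amplitude values below are hypothetical, not data from the study.

```python
import statistics

def zscore_normalize(mep_amps):
    """Normalize one participant's MEP amplitudes to z-scores (subtract
    the participant's mean, divide by their SD), reducing the effect of
    inter-participant variability in absolute amplitude."""
    mu = statistics.mean(mep_amps)
    sigma = statistics.stdev(mep_amps)
    return [(x - mu) / sigma for x in mep_amps]

amps = [0.4, 0.6, 0.8, 1.0, 1.2]   # mV, hypothetical single participant
z = zscore_normalize(amps)
print([round(v, 2) for v in z])    # [-1.26, -0.63, 0.0, 0.63, 1.26]
```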
VMIQ-2 QUESTIONNAIRE
Participants' responses to the VMIQ-2 questionnaire revealed mean scores of 28.74 (±13.51) for external visual imagery, 22.26 (±8.22) for internal visual imagery, and 26 (±9.27) for kinesthetic imagery. This indicates that all participants reported being able to generate "reasonably clear and vivid" imagery for all three subscales of the questionnaire.
PRE-BLOCK FIXATION CROSS DATA
The results of the 2 (muscle) × 6 (block) repeated measures ANOVA performed on the pre-block (fixation cross) data showed no significant main effects for muscle, F(1,18) = 1.55, p = 0.23, ηp² = 0.08, or block, F(5,90) = 0.88, p = 0.50, ηp² = 0.05. In addition, there was no significant muscle × block interaction effect, F(5,90) = 1.02, p = 0.41, ηp² = 0.05. This confirmed that any MEP amplitude differences found between experimental blocks could be attributed to the video condition presented to the participants, rather than due to any significant coil movement or attentional fatigue across the experiment that may have affected the MEP results.
MAIN EXPERIMENT DATA
The repeated measures ANOVA revealed a significant muscle × video interaction effect, F(5,90) = 4.32, p = 0.001, ηp² = 0.19 (see Figure 2). Pairwise comparisons showed MEP amplitudes recorded from the FDI muscle during AO+MI were significantly higher than PO (p = 0.04) and all three control conditions (all p < 0.05). There was no significant difference between AO+MI and MI (p = 0.15). MEP amplitudes recorded from the FDI muscle during MI were significantly higher than during the control conditions of SH (p = 0.01) and MO (p = 0.05). There was no significant difference between MI and PO (p = 0.45) and MI and BC (p = 0.44). In addition, there was no difference between MEP amplitudes obtained during PO in comparison to all three control conditions, although the difference between PO and SH approached significance (p = 0.07). No other pairwise comparisons were significant (all p > 0.05).
Pairwise comparisons showed MEP amplitudes recorded from the ADM during BC were significantly higher than SH (p = 0.01), PO (p = 0.007), and AO+MI (p = 0.03). No other significant differences were found (all p > 0.05).
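The Sidak adjustment applied to the pairwise comparisons above follows the standard formula p_adj = 1 − (1 − p)^m for m comparisons. The sketch below uses hypothetical raw p-values, not those reported in the study.

```python
def sidak_adjust(p_values):
    """Sidak-adjusted p-values for a family of m comparisons:
    p_adj = 1 - (1 - p)**m. Controls the family-wise error rate,
    slightly less conservatively than Bonferroni."""
    m = len(p_values)
    return [1 - (1 - p) ** m for p in p_values]

raw = [0.005, 0.02, 0.20]          # hypothetical raw pairwise p-values
adj = sidak_adjust(raw)
print([round(p, 3) for p in adj])  # [0.015, 0.059, 0.488]
```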
DISCUSSION
The primary aim of this experiment was to establish whether combined action observation and imagery of human movement would facilitate corticospinal excitability, and whether such an effect would be greater than that which occurs during either PO or MI alone. The secondary aim was to determine whether any such corticospinal facilitation was specific to the muscles involved in the observed/imagined action. This section will first discuss the current findings in relation to the effects of combined action observation and MI on corticospinal excitability. This will be followed by a discussion of the findings related to PO alone and MI alone, before finally discussing the findings reported in the ADM muscle.
FACILITATION OF CORTICOSPINAL EXCITABILITY DURING COMBINED ACTION OBSERVATION AND MOVEMENT IMAGERY (AO+MI)
Combined action observation and movement imagery (AO+MI) of simple index finger movements produced larger amplitude MEPs in the FDI muscle than were obtained from control conditions of observing a SH, observing movement of an inanimate object (MO), and counting backwards mentally (BC). The combined action observation and imagery condition also produced MEPs of larger amplitude than passive observation alone (PO). Changes in MEP amplitude represent modulation of corticospinal excitability (Rothwell, 1997; Petersen et al., 2003; Naish et al., 2014). The results therefore indicate that combined action observation and imagery of simple human movements can facilitate corticospinal excitability, and the extent of this facilitation is greater than occurs during PO alone. This finding is consistent with our hypothesis and previous research into the effects of combined action observation and imagery on corticospinal excitability (e.g., Sakamoto et al., 2009; Ohno et al., 2011; Tsukazaki et al., 2012). This facilitation effect during AO+MI was, however, only evident in the FDI muscle, and not the ADM muscle. The FDI muscle is the prime mover in index finger abduction, whilst the ADM is not involved in the execution of this movement. The results, therefore, indicate that the corticospinal facilitation effect during combined AO+MI is specific to the muscles involved in executing the observed/imagined task. Although this effect has been reported in previous action observation (e.g., Fadiga et al., 1995) and imagery (e.g., Fadiga et al., 1999) studies using TMS, to the best of our knowledge this study is the first to report such effects in a combined AO+MI condition. Facilitation of corticospinal excitability during AO+MI may be indicative of activity within the human mirror neuron system.
This system, comprising a network of brain regions including the premotor cortex and inferior parietal lobule (Rizzolatti and Craighero, 2004), is activated during both physical movement execution and by observation and imagery of the same action (Rizzolatti, 2005). Although the motor cortex, stimulated in the current experiment, is external to this network of brain regions, Fadiga et al. (2005) proposed that strong cortico-cortical connections link the premotor and motor cortices. It is, therefore, generally accepted that the facilitation of corticospinal excitability during action observation or MI is reflective of increased activity in premotor brain regions that connect to the primary motor cortex (Fadiga et al., 2005). As similar parts of the premotor cortex are activated when observation or imagery are performed in isolation (e.g., Grèzes and Decety, 2001;Filimon et al., 2007;Munzert et al., 2008), engaging concurrent AO+MI may result in stronger activity in these regions (e.g., Macuga and Frey, 2012;Nedelko et al., 2012;Villiger et al., 2013). This may explain the greater facilitation of corticospinal excitability reported for combined AO+MI, compared to PO alone.
Although combined AO+MI facilitated corticospinal excitability to a greater extent than PO, no effect was found in comparison to MI alone. Figure 2 indicates that whilst MEP amplitudes in the combined AO+MI condition appeared to be larger than those obtained in the MI condition, the difference was not significant. This finding conflicts with our hypothesis and previous TMS research which has compared the effects of combined AO+MI against MI alone (e.g., Sakamoto et al., 2009; Tsukazaki et al., 2012). One possible explanation for this inconsistency could be related to discrepancies between the more detailed imagery instructions provided to participants in the current study compared to those offered in previous experiments. Since Stinear et al. (2006) have demonstrated that kinesthetic imagery is more effective in facilitating corticospinal excitability than visual imagery, we instructed participants to focus specifically on "imagining the physiological sensations associated with execution of the index finger abduction movement". Kinesthetic aspects of imagery, however, were not emphasized in the studies conducted by Sakamoto et al. (2009) and Tsukazaki et al. (2012). For example, Sakamoto et al. told participants to "imagine flexing and extending their elbow", whilst Tsukazaki et al. told participants to "imagine that they were performing three-ball juggling by mirroring what they saw in the video clips". It is possible that the instruction to focus on kinesthetic imagery could have enhanced the amplitude of MEPs that we recorded during MI and, as such, contributed to the lack of significant difference in MEP amplitude between combined AO+MI and MI alone. Further controlled work on instructional sets as important mediators of MEP response is clearly warranted.
An alternative explanation for the lack of a significant difference between combined AO+MI and MI alone could be related to the imagery abilities of the participants in the different studies. Williams et al. (2012) correlated MEP amplitudes obtained during imagery of finger-thumb opposition movements with selfreported imagery vividness scores, as measured by the VMIQ-2. They demonstrated that larger amplitude MEPs were associated with a greater kinesthetic imagery vividness. The participants in the current study were all competent imagers, having reported being able to generate "reasonably clear and vivid" images on all sub-scales of the VMIQ-2. Sakamoto et al. (2009) did not report any imagery ability values for participants in their study, whilst the novice jugglers in the study by Tsukazaki et al. (2012) appeared to have a moderate imagery vividness, as measured by a simple selfreport measure. It is possible that the participants recruited for this study were more competent imagers than those recruited by Sakamoto et al. and Tsukazaki et al. The possible superior imagery vividness of our participants may have increased MEP amplitudes obtained during MI alone and thus contributed to the lack of difference between combined AO+MI and MI alone conditions. This proposal highlights the importance for researchers to report their participants' imagery ability characteristics to control for this potentially confounding variable that could inflate MEP contrasts for poor imagers.
FACILITATION OF CORTICOSPINAL EXCITABILITY DURING PASSIVE OBSERVATION (PO)
It is commonly reported that PO of human movement facilitates corticospinal excitability compared to control conditions (e.g., Fadiga et al., 1995;Strafella and Paus, 2000;Patuzzo et al., 2003;Borroni et al., 2005;Aglioti et al., 2008;Loporto et al., 2012). Despite a trend for this effect (PO > SH; p = 0.07), the results of this study do not fully support previous work as PO did not produce MEPs of significantly larger amplitude than the control conditions. This may relate, in part, to the instructions provided to direct participants' attention to the observation video. The instructions that accompany action observation conditions in TMS research are typically vague and are usually not reported in detail. It is interesting to note, however, that where studies have compared the effects of different instructions during action observation directly, they have often failed to detect a facilitation effect during PO conditions. For example, several researchers have reported that instructing participants to observe an action and simultaneously imagine performing that action facilitates corticospinal excitability, but instructions to only observe an action do not (Sakamoto et al., 2009;Ohno et al., 2011;Tsukazaki et al., 2012). In addition, Roosink and Zijdewind (2010) demonstrated that instructing participants to observe an action with the intention to imitate it later produced MEPs of larger amplitude than when participants were instructed to simply observe an action. These findings are also supported by fMRI research indicating greater activity, compared to PO, in movement-related brain regions when observation and imagery occur simultaneously (e.g., Macuga and Frey, 2012;Nedelko et al., 2012;Villiger et al., 2013) or when actions are observed with the intention of future imitation (e.g., Grèzes et al., 1999;Buccino et al., 2004;Frey and Gerry, 2006). 
The instructions provided to participants seem to play a crucial role in modulating activity of the motor system during action observation (Naish et al., 2014). Therefore, it is possible that, in some cases, PO alone is not sufficient to enhance corticospinal excitability above resting levels. As such, supplementing PO with additional instructions may be more appropriate in motor rehabilitation settings than only instructing patients to observe a video. Based on the results of this study, and the behavioral evidence provided by Eaves et al. (2014), providing additional instructions for participants to imagine performing the action as they observe it would appear to be a promising option. Further research should investigate this possibility by comparing the effects on corticospinal excitability of different types of instructions during observation (e.g., observe and imagine, observe to imitate) against PO.
FACILITATION OF CORTICOSPINAL EXCITABILITY DURING MOVEMENT IMAGERY (MI)
Research investigating the effects of MI on corticospinal excitability has shown that imagery of human movement elicits MEPs of larger amplitude than control conditions (e.g., Kasai et al., 1997; Fadiga et al., 1999; Hashimoto and Rothwell, 1999; Rossini et al., 1999; Facchini et al., 2002). The amplitudes of MEPs recorded during imagery, however, do not typically differ from those obtained during PO (e.g., Clark et al., 2004; Léonard and Tremblay, 2007; Roosink and Zijdewind, 2010; Williams et al., 2012). The results of this experiment are consistent with these findings. Despite this, it is important to note that MI did not produce MEPs of larger amplitude than the BC control condition. In previous research, MEP amplitudes obtained during MI have typically been compared to resting MEP values. This comparison, however, does not allow researchers to attribute the facilitation to imagery of human movement per se, as the effect may be due to the presence of cognitive activity in the imagery condition. The BC condition was included to address this issue by allowing a comparison to be made between movement-related and non-movement-related cognitive activity. As there was no difference between these two cognitive conditions, it could be argued that the current results do not represent a true corticospinal facilitation effect for MI. Interestingly, Clark et al. (2004) also included a BC condition in their comparison of MEP amplitudes between observation and imagery. Consistent with our findings, they reported that the MEPs obtained during BC were not significantly different from those obtained during imagery or observation. As such, they concluded that part of the facilitation recorded during imagery and observation may be due to attentional processing. The findings reported in both the current study and by Clark et al. indicate that neither PO nor MI facilitated corticospinal excitability to a greater extent than a simple non-motor cognitive task.
This, therefore, adds weight to the claim that combined AO+MI may be more effective in motor rehabilitation settings than either PO or imagery alone (e.g., Sakamoto et al., 2009;Ohno et al., 2011;Vogt et al., 2013), as combined AO+MI was the only experimental condition to facilitate corticospinal excitability to a greater extent than all three control conditions.
FACILITATION OF CORTICOSPINAL EXCITABILITY IN THE ADM MUSCLE
A final point for discussion relates to the findings reported in the ADM muscle. The ADM is not involved in the execution of the experimental task, and so no significant differences between any conditions were expected in this muscle. The amplitudes of MEPs recorded during the BC condition were, however, larger than those obtained in the SH, PO, and combined AO+MI conditions. This finding can be explained by research indicating a link between counting and hand motor areas. Andres et al. (2007) applied single-pulse TMS to the right hand representation of the motor cortex during counting tasks and a color-recognition control task. They obtained MEPs of larger amplitude during counting conditions, compared to the control task. In a subsequent experiment, they demonstrated that this effect was specific to the hand muscles, as similar findings were not obtained when arm and foot muscles were stimulated during counting. The authors suggested that this finding may relate to finger movements playing a crucial role in learning to count during childhood. As a result of this developmental process, hand motor circuits may assist counting in adults by monitoring the relationship between different digits in a series (Andres et al., 2008). The BC condition may therefore have induced, either consciously or sub-consciously, imagined finger movements in the form of "finger counting". This activity would likely involve the ADM muscle, which may explain why MEP amplitudes were facilitated in this condition. Despite this explanation, it remains unclear why this effect was not evident in the FDI muscle during the BC condition. It is possible, however, that any effects in the FDI were dwarfed by the muscle-specific facilitation effect obtained during observation/imagery of the index finger abduction movement. This link between counting and motor areas may also provide an additional explanation for the lack of difference between MI and BC in the FDI muscle, discussed above.
LIMITATIONS
The results of the current experiment provide convincing evidence that combined action observation and MI facilitates corticospinal excitability, but it is important to acknowledge several limitations of the experiment. First, as experimental conditions were presented in a fixed order (i.e., SH, then PO, then MI, then AO+MI), participants may have been more familiar with the observed action when they completed the AO+MI condition than when they completed the PO condition. This increased familiarity with the observed movement may have contributed to the increased MEP amplitude in the combined condition. However, presenting the conditions in this order was essential to discourage participants from engaging in AO+MI during the PO condition.
Second, we cannot confirm that participants did not engage in AO+MI during PO conditions, despite the order of the conditions being structured in an attempt to prevent this. This is a recognized problem in action observation and imagery experiments, as researchers can never be certain that participants complete the conditions exactly as instructed. However, the significant difference between AO+MI and PO conditions indicates that imagery during PO trials is unlikely to have occurred in the current study.
Third, in the MI condition, participants completed their imagery in time with an auditory metronome. The purpose of this was to ensure that the timing of participants' imagined finger movements was consistent with the timing of the observed movements in the PO and AO+MI conditions. The auditory metronome was also included in the BC condition as a control. This may be problematic, as processing an auditory beat has been shown to activate motor regions in the brain (e.g., Grahn and Brett, 2007). As such, auditory processing, introduced by the presence of the metronome, may have influenced the amplitude of the MEPs in the MI and BC conditions, which may account for the lack of a significant difference between these conditions. Nevertheless, the inclusion of the metronome was unavoidable given the need to deliver TMS at consistent timings in the imagery and observation conditions.
SUMMARY
The results presented here have relevance for rehabilitation programs seeking to promote recovery of motor functioning in patients. In stroke rehabilitation settings, PO and MI are both advocated as beneficial intervention techniques, as they can maintain activity in the motor regions of the brain when physical movement is limited or not possible (Sharma et al., 2006; de Vries and Mulder, 2007; Ertelt et al., 2007; Holmes and Ewan, 2007; Mulder, 2007). In the current study, the combined AO+MI condition produced MEPs of larger amplitude than PO, and was the only experimental condition to facilitate corticospinal excitability to a greater extent than all three control conditions. The results therefore indicate that combining observation and imagery techniques into a single intervention strategy may prove to be a more effective tool in rehabilitation settings than the use of either technique in isolation.
Satellite integrity monitoring for satellite-based augmentation system: an improved covariance-based method
Satellite integrity monitoring is vital to a satellite-based augmentation system: it provides the confidence of the differential corrections for each monitored satellite, satisfying stringent safety-of-life requirements. Satellite integrity information includes the user differential range error and the clock-ephemeris covariance, which are used to deduce the integrity probability. However, the existing direct statistic methods suffer from a low integrity bounding percentage. To address this problem, we develop an improved covariance-based method to determine satellite integrity information and evaluate its performance in the range domain and position domain. Compared with the direct statistic method, the integrity bounding percentage is improved by 24.91% and the availability by 5.63%. Compared with the covariance-based method, the convergence rate for the user differential range error is improved by 8.04%. The proposed method is useful for the satellite integrity monitoring of a satellite-based augmentation system.
Introduction
Satellite-Based Augmentation System (SBAS) provides the differential corrections and integrity information to Global Navigation Satellite System (GNSS) users or SBAS users, enhancing the accuracy and integrity of GNSS services. With the signals of SBAS satellites used for ranging, SBAS also enhances the continuity and availability of GNSS (Meng & Hsu, 2021; SC-159, 2016), as shown in Fig. 1.
To guarantee flight safety, the integrity information of GNSS navigation signals shall be determined for aviation users. The integrity information includes the User Differential Range Error (UDRE) and the clock-ephemeris covariance matrix; their stringent safety-of-life requirements are described in the document (SC-159, 2016). Specifically, the integrity information denotes the uncertainty of the satellite corrections. It sets the tolerances of the SBAS correction errors and, further, the user's Horizontal Protection Level (HPL) and Vertical Protection Level (VPL) (Lu et al., 2021).
How to determine the satellite integrity information satisfying the requirement of flight safety has always been a key topic for SBAS. The existing satellite integrity algorithms are mainly divided into the Covariance-Based (CB) methods and the Direct Statistic (DS) methods.
The covariance-based methods have attracted great attention for many years. Tsai (1999) adopted weighted least squares to estimate the UDRE with the covariance matrix for the combined ephemeris and clock errors, but Shao (2012) demonstrated that the UDRE deduced with Tsai's method sometimes cannot bound the residual errors. Walter et al. (2001) first put forward a new message, named Message Type 28 (MT28), which contains a relative clock and ephemeris covariance matrix for individual satellites. The covariance matrix is a location-specific modifier used to adjust the broadcast UDRE values as a function of the user's position (SC-159, 2016). From this matrix, users can reconstruct their location-specific error bound rather than applying the largest bound in the service volume, improving the availability within the service volume and the integrity outside the service volume (Walter et al., 2001). Wu and Peck (2002) proposed two methods, considering the false alert rate and missed detection probability, to construct the shape covariance, and used them to find the best covariance with the prototype software of the Wide Area Augmentation System (WAAS); the computational time of their method was, however, longer than that of the method in use at the time. Blanch et al. (2012) proposed a complex algorithm to compute the error bounds of the clock and ephemeris for dual-frequency SBAS with the service volume analysis tool MAAST (MATLAB Algorithm Availability Simulation Tool). Shao et al. (2011a) analyzed the projection of the clock-ephemeris covariance in the direction of the pseudorange and developed an analytical method to solve for UDRE using simulation data, but did not provide the service performance (Shao, 2012). Chen et al. (2018) put forward a pseudorange residuals-based method to compute the clock-ephemeris covariance for dual-frequency multi-constellation SBAS, neglecting the effects of fat tails.
The direct statistic methods have been discussed several times. Chen (2001) gave the formula to compute UDRE and showed initial results. Li (2018) also gathered the statistics of the range errors from the clock-ephemeris to determine a UDRE which could only bound these range errors with a probability of 75%, not satisfying the integrity requirement.
Obviously, the direct statistic method is not a proper method to create an integrity bound as it will never accumulate enough independent samples in any feasible time frame. The residual errors shall be overbounded using the threat models that consider other information about the possible magnitude of the error.
Moreover, the BeiDou-3 Navigation Satellite System (BDS-3) was completed a few years ago, whereas the Global Positioning System (GPS) has been fully operational since 1995. Likewise, the BeiDou Satellite-Based Augmentation System (BDSBAS) was only recently constructed, while WAAS passed certification testing and started providing services to civil users in 2003. Thus, BDSBAS urgently needs a satellite integrity monitoring method that works with monitor stations deployed only in the domestic region under a limited layout. BDSBAS faces many difficulties in integrity monitoring, especially in the south of the inverted-triangle network and in the edge area of the monitoring network.
Although several algorithms have been developed for satellite integrity monitoring, some problems remain to be solved. Since SBAS has a top-level safety requirement, any integrity risk issue shall be considered. The direct statistic methods cannot provide UDRE and MT28 accurately, and their integrity bounding rate in the range domain is too low to bound the range error from the clock-ephemeris. The latest CB method, noted as the WAAS CB method, shows a good performance, but depends on three expensive receivers installed at each monitor station and on an overseas monitoring network layout, which is not applicable to the Chinese BDSBAS. The problems are how to estimate the integrity information accurately, improve the integrity bounding rate in the range domain, and develop a method suitable for the Chinese monitoring network layout and terrain; these problems motivate the authors to put forward a method for satellite integrity monitoring.
An improved covariance-based method is developed to determine satellite integrity information. Firstly, the covariance matrix of the satellite clock-ephemeris correction errors output from the correction processor is adjusted by a scale model and a groove model. Subsequently, the adjusted matrix is decomposed into the user differential range errors and the clock-ephemeris covariance matrix. Finally, the performance of the proposed method is analyzed in the range domain and position domain. Compared with the direct statistic method, the integrity bounding rate and the availability are both improved obviously. Compared with the covariance-based method, the convergence rate for the user differential range error is faster by 8.04%. The contributions are listed as follows:

1. An improved covariance-based method is proposed to perform satellite integrity monitoring. The user equivalent range error is used to adjust the shape of the clock-ephemeris covariance (Li et al., 2020), reducing the computational complexity, and the geometry between satellites and monitor stations is taken as a new information source to compensate for the insufficient monitoring capability of monitor stations when a monitored satellite is moving over the boundary of the monitoring network. The geometry of the satellite and monitor stations is used to mitigate random errors and integrity threats, improving the accuracy of the estimated satellite integrity information.

2. A groove model is developed to adjust the clock-ephemeris covariance and find a suitable shape covariance matrix. This model is used to construct a near-optimal solution rather than the theoretically best broadcast covariance matrix, which would need complicated computation. When a satellite is under a bad monitoring geometry, both the pseudorange residuals and the geometry information are used to determine the modifier for the clock-ephemeris covariance. The model provides an idea for performing satellite integrity monitoring by considering the motion of a satellite above its monitoring network.
This article is organized as follows. Section II describes the preliminaries of satellite integrity and the problem under discussion. Section III presents the model and the process to deduce the satellite integrity information. Section IV compares the performance of the proposed method with state-of-the-art methods. Section V summarizes the advantages and characteristics of the proposed method.
Preliminaries and problem formulation
In this section, the concept of UDRE and its modifier (or MT28) is introduced in the subsection preliminaries, which is the basis of satellite integrity monitoring. Then, the problem under consideration and the research objective are given in the subsection problem formulation.
Preliminaries
Satellite corrections include long-term corrections and fast corrections which are used to revise the slowly changing errors and rapidly changing errors of Satellite Clock-Ephemeris (SCE), respectively. The accuracy of the combined long-term and fast corrections is indicated by UDRE along with MT28.
The definition of UDRE was put forward in an early version of RTCA DO-229 (SC-159, 2016). The UDRE is used as the confidence limit of the pseudorange residual errors corresponding to the satellite corrections at any point, in space and time, of the service volume of a monitored satellite. The UDRE is broadcast to support HPL and VPL by bounding the Horizontal Position Error (HPE) and Vertical Position Error (VPE) with a required probability, respectively. The UDRE is quantified by the UDRE Index (UDREI), and the UDRE value represented by each indexed value is listed in the lookup table (SC-159, 2016). The table gives both a 3.29-sigma value (UDRE) and a 1-sigma value σ_UDRE relative to clock-ephemeris errors (Wu & Peck, 2002).
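As a concrete sketch of this quantization step, the mapping from a 1-sigma σ_UDRE value to a UDREI can be implemented as a table lookup. The table values and function name below are illustrative placeholders only; the normative values are defined in RTCA DO-229.

```python
# Illustrative (non-normative) fragment of a sigma_UDRE lookup, in metres.
# The real UDREI table is defined in RTCA DO-229 (SC-159, 2016).
SIGMA_UDRE_TABLE = [0.25, 0.50, 0.75, 1.00, 1.25, 1.50, 2.00, 3.00]

def udre_index(sigma_udre):
    """Return the smallest UDREI whose tabulated 1-sigma value bounds sigma_udre."""
    for idx, bound in enumerate(SIGMA_UDRE_TABLE):
        if sigma_udre <= bound:
            return idx
    # Beyond the last entry, a real system would flag the satellite "Do Not Use".
    return len(SIGMA_UDRE_TABLE)
```

Rounding up to the next tabulated value is what makes the broadcast bound conservative: the user always receives a sigma at least as large as the monitored one.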
Afterwards, the MT28 was proposed to improve the integrity and availability of SBAS. The MT28 provides the 10 entries of an upper triangular matrix which is used to construct a relative clock-ephemeris covariance matrix and further a location-specific error bound for each monitored satellite (Walter et al., 2001). Then, the satellite correction errors are bounded by UDRE along with the associated MT28. The UDRE along with MT28 represents the clock-ephemeris error bound and a user-level error limit in the line of sight between each satellite-user pair.
Problem formulation
In this subsection, the authors will present a detailed description of the problem under consideration.
In the direct statistic method, the integrity parameter σ²_UDRE,DS for satellite i is given by (Li, 2018)

σ²_UDRE,DS = (1 / (N − 1)) Σ_{j,k} (dρ^i_{j,k} − dρ̄^i)²    (1)

where the designators i, j, and k represent the satellite, monitor station, and epoch, respectively, and N is the number of residual samples. dρ^i_{j,k} denotes the pseudorange residual between satellite i and station j at epoch k. The variable dρ̄^i denotes the mean of the set dρ^i_{j,k} during a UDRE update period. Then, UDRE is obtained according to the lookup table (SC-159, 2016).
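The direct-statistic estimator described above is simply the sample standard deviation of the residuals gathered over one update period; a minimal sketch (function and variable names are ours):

```python
def sigma_udre_ds(residuals):
    """Direct-statistic 1-sigma UDRE: the sample standard deviation of the
    pseudorange residuals d_rho[j, k] collected for one satellite over a
    UDRE update period."""
    n = len(residuals)
    mean = sum(residuals) / n
    variance = sum((r - mean) ** 2 for r in residuals) / (n - 1)
    return variance ** 0.5
```

As the surrounding text argues, no matter how this statistic is computed, the sample will never contain enough independent residuals to certify a 10⁻⁷-level bound directly.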
This method demonstrates that the computed UDRE only bounds the pseudorange residual dρ^i_{j,k} with a probability of approximately 75% (Li, 2018; Li et al., 2018, 2019). Obviously, there are some shortcomings of the direct statistic method. Basically, the integrity bounding rate shall be calculated by UDRE along with MT28, not UDRE alone. Conclusively, the UDRE cannot bound the clock-ephemeris range residuals for individual satellites with a prescribed probability, and the integrity bounding rate is too low to meet the confidence level of the satellite corrections. Essentially, since the samples of the set dρ^i_{j,k} are never adequate, UDRE cannot be obtained precisely (Decleene, 2000). As for the WAAS CB method, its characteristics can be listed as follows:

1. The WAAS CB method strongly depends on the performance of three sets of monitor station receivers, which are costly. With three sets of receivers, WAAS shows a good performance, but the construction cost of monitor stations is quite high: each WAAS monitor station costs millions of dollars, dozens of times that of each Chinese monitor station.

2. The WAAS CB method is closely related to the layout of monitor stations, and the performance of WAAS in the continental United States is ensured only by the broad layout of WAAS monitor stations, some of which are deployed in the Central Pacific, Canada, and Mexico. The WAAS CB method does not discuss how to perform satellite integrity monitoring under a limited monitoring network layout (or terrain) or in the edge area of the monitoring network.

3. WAAS is developed for civil applications, and the latest satellite integrity algorithm for WAAS aims at the improvement of integrity for safety-of-life users. The WAAS algorithm emphasizes integrity in the range domain and position domain (Chen et al., 2018). To ensure the integrity of WAAS, the accuracy of the Signal-In-Space (SIS) of WAAS is conservative and is not sufficient for precise positioning applications (Chen et al., 2017b; Zheng et al., 2019, 2022).
In a word, satellite integrity monitoring for a satellite-based augmentation system in China faces many difficulties. The DS method shows a low integrity bounding rate, and the current domestic satellite integrity monitoring cannot satisfy the requirements of SBAS. The latest covariance-based method, noted as the WAAS CB method, relies on high-quality monitor stations and their broad layout, which differs from the Chinese situation; moreover, the WAAS CB method sacrifices SIS accuracy for satellite integrity.
The problem under consideration is how to determine the confidence limit for satellite corrections. Specifically, the problem under discussion is how to perform satellite integrity monitoring with monitor stations of limited quality and a limited layout. Three assumptions are given below:

1) The pseudorange between a satellite–monitor station pair contains satellite clock-ephemeris errors, and can be used to monitor the status of each satellite.

2) User equivalent range errors between satellites and monitor stations reflect the monitoring capability of a monitoring network composed of widely distributed monitor stations, and can be used to determine a suitable shape covariance matrix for the satellite clock-ephemeris.

3) The geometry between satellites and monitor stations is related to the monitoring capability of the monitoring network, and can be used to find the suitable shape covariance matrix of the satellite clock-ephemeris.

The objective is to develop a method to determine UDRE and MT28 for each monitored satellite.
Determination of integrity information for SBAS
In this section, a scale model and a groove model are developed for SBAS to perform satellite integrity monitoring.
Scale model based on multiple error sources
The pseudorange correction error dρ, namely the User Equivalent Range Error (UERE), for a specific satellite is computed by

dρ = Δρ − ΔR − ΔB    (2)

where the variables Δρ, ΔR, and ΔB represent the synchronized pseudorange residual from the monitor stations, the total long-term corrections computed from the long-term satellite error corrections, and the total fast corrections computed from the fast corrections and range-rate corrections, respectively. The 4 × 1 vector I consists of a unit vector and one element −1. The first three terms of this vector are the components of the unit vector along the line of sight in Earth-Centered Earth-Fixed (ECEF) coordinates (Wu & Peck, 2002).
Specifically, the pseudorange residual in (2) due to the ephemeris and clock for a satellite-user pair with the line of sight I is theoretically given by (Blanch et al., 2012)

dρ_SCE = I^T (x − x_BD)    (3)

where x and x_BD represent the true clock-ephemeris and the clock-ephemeris computed from the broadcast ephemeris and SBAS corrections, respectively. UDRE along with MT28 is the upper bound on this pseudorange residual for each satellite-user pair. According to SC-159 (2016), the error bound is expressed in the following form (Blanch et al., 2014)

|I^T (x − x_BD)| ≤ K_7 σ_flt,  with  σ_flt = σ_UDRE √(I^T Cov_MT28 I)    (4)

where the matrix Cov_MT28 is a 4 by 4 matrix. The parameter K_7 denotes the quantile of 99.99999%. The parameter σ_flt denotes the clock-ephemeris standard deviation determined by UDRE and MT28.
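Assuming the bound takes the DO-229 form σ_flt = σ_UDRE·√(I^T·Cov_MT28·I) (the ε_C quantization term broadcast with MT28 is omitted here), the user-side evaluation can be sketched as follows; function names are ours:

```python
def delta_udre(I, C):
    """Location-specific modifier sqrt(I^T C I): I is the 4-vector
    [line-of-sight unit vector, -1] and C the MT28 relative covariance.
    A real SBAS user would also add the broadcast scale term epsilon_C."""
    CI = [sum(C[r][c] * I[c] for c in range(4)) for r in range(4)]
    return sum(I[r] * CI[r] for r in range(4)) ** 0.5

def sigma_flt(sigma_udre, I, C):
    """Clock-ephemeris standard deviation from UDRE and the MT28 covariance."""
    return sigma_udre * delta_udre(I, C)
```

Because the modifier depends on I, each user reconstructs a bound specific to their own line of sight instead of the worst case over the whole service volume.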
To deduce the error bound for the clock-ephemeris, there are four cases to be considered (Blanch et al., 2012): (1) nominal errors from the monitoring network receivers; (2) nominal biases or antenna biases; (3) satellite correction errors; (4) possibly undetected errors in the monitoring network receivers, where one station is assumed to return erroneous measurements. As for case (1), the relationship between the bound on the estimation error and the probability P_HMI of Hazardously Misleading Information (HMI) is described by

P(|I^T (x − x̂)| ≥ K_HMI √(I^T P_SCE I)) ≤ P_HMI    (5)

where x̂ is the estimated clock-ephemeris state, K_HMI represents the quantile related to the probability P_HMI, and P_SCE denotes the covariance for the state x of the clock-ephemeris. The probability P_HMI is determined by an integrity allocation strategy (Lu et al., 2021; SC-167, 1992; Schempp et al., 2001; Wu & Peck, 2002).
According to (5), the error bound on the estimation error under nominal conditions is given by

L_1 = K_HMI √(I^T P_SCE I)    (6)

The bound L_1 is an upper bound on clock-ephemeris errors before the clock-ephemeris covariance matrix is broadcast. The nominal biases b, such as antenna biases and Code Noise and Multi-Path (CNMP), termed case (2), are described as a Gaussian vector with expectation b̄ and covariance W^{-1}. For the line of sight, the contribution of these biases is given by I^T H b̄, and an upper bound on this variable can be deduced (Blanch et al., 2012). The parameter K_bias associated with the antenna biases is calculated in real time as a function of their covariance and the maximum biases (Shallberg & Sheng, 2008). Therefore, when case (2) is considered, the error bound on the estimation error is adjusted to include this bias contribution, yielding a bound L_2 for clock-ephemeris errors under cases (1) and (2).
When the error from the broadcast clock-ephemeris, namely the quantization error of the satellite corrections, termed case (3), is considered, an upper bound for this error can be deduced from a corresponding inequality in which the designator i represents the epoch of each update interval. When case (3) is taken into consideration, the error bound on the estimation error is updated accordingly; the new error bound obviously remains in accordance with the constraint (5).
As for case (4), the reliability of each monitor station can be guaranteed by checking the observations of multi-set receivers, which refers to a problem of data quality monitoring or system reliability (Hamada, 2008; Yin & Chai, 2020). WAAS monitor stations are equipped with three sets of receivers to collect observations (Parkinson et al., 1996). The observations from the three receivers are used to conduct a cross check among three threads to remove erroneous observations (Parkinson et al., 1996; Shallberg & Sheng, 2008). Moreover, dual-frequency pseudoranges can be smoothed by dual-frequency carriers, and a related algorithm such as the IFree filter is chosen to improve the quality of the observations while simultaneously removing the ionospheric delay (Hwang et al., 1999; Konno et al., 2006).
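The carrier-smoothing step mentioned above can be illustrated with the classic Hatch-style recursion (a simplified single-frequency stand-in for the dual-frequency IFree filter; the window length M and all names are ours):

```python
def carrier_smooth(pseudoranges, carriers, M=100):
    """Hatch-style carrier smoothing: blend each raw pseudorange with the
    carrier-propagated previous estimate, using an effective window of M epochs.
    The carrier delta supplies the low-noise range change between epochs."""
    smoothed = [pseudoranges[0]]
    for k in range(1, len(pseudoranges)):
        w = 1.0 / min(k + 1, M)
        propagated = smoothed[-1] + (carriers[k] - carriers[k - 1])
        smoothed.append(w * pseudoranges[k] + (1.0 - w) * propagated)
    return smoothed
```

The IFree variant applies the same recursion to an ionosphere-free combination of the two frequencies, so the smoothing is not corrupted by code-carrier ionospheric divergence.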
Before message type 28 is broadcast, the covariance matrix of the clock-ephemeris needs to be adjusted to meet the integrity requirement. According to (4) and (11), the adjusted covariance matrix P_1 can be obtained; this matrix denotes the clock-ephemeris covariance considering the four cases.
After the message type 28 is available, the quantization error related to this message type needs to be protected either by the term ε_C or by increasing the broadcast UDRE (Walter et al., 2001). Computing the theoretically optimal matrix P_brdc is a complicated mathematical problem, so a near-optimal method is adopted in practice. The authors choose a scaling method where the matrix P_brdc is obtained by finding a suitable shape covariance matrix and then scaling it so that the integrity safety condition is met (Walter et al., 2001; Wu & Peck, 2002). The integrity condition to be satisfied is

σ_UDRE δ_UDRE ≥ √(I^T P_1 I)    (14)

Then, the covariance matrix can be adjusted again using the parameter P_max in (16), deduced over the set I_SV of all 4 by 1 vectors located in the Service Volume (SV) of a monitored satellite. The Worst User Location (WUL) is determined by an analytic method (Shao et al., 2011a; Zhao et al., 2014).
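A sketch of this "find a shape, then scale it" step: given the adjusted covariance P_1 and a candidate shape matrix, the minimal scale that satisfies the integrity condition for every sampled line of sight in the service volume is the worst-case ratio of the two quadratic forms. The discrete service-volume sampling and all names are our assumptions:

```python
def scale_shape_covariance(P_shape, P1, los_grid):
    """Scale a candidate shape covariance so that the projected bound covers
    I^T P1 I for every sampled 4-vector I in the service volume, following
    the near-optimal scaling approach described in the text."""
    def quad(P, I):
        return sum(I[r] * sum(P[r][c] * I[c] for c in range(4)) for r in range(4))
    # Worst-case ratio over the sampled service volume sets the scale factor.
    s = max(quad(P1, I) / quad(P_shape, I) for I in los_grid)
    return [[s * P_shape[r][c] for c in range(4)] for r in range(4)]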
In (16), the true value of σ_UDRE is not known and therefore an overbound must be determined. Under the assumption that the range error dρ from the clock-ephemeris satisfies dρ ∼ N(µ, σ²), the Gaussian distribution N(µ, (|µ|/K_HMI + σ)²) is used to find the bound of dρ, with its standard deviation computed as |µ|/K_HMI + σ. Several situations are considered to tackle the integrity threats. Firstly, to meet the strict Gaussian overbounding properties required by WAAS integrity monitoring, the CNMP algorithm, including a mean filter and a mean error function, is adopted to reduce the effects of multipath (Decleene, 2000; Shallberg et al., 2001). Secondly, the right-tail Cumulative Distribution Function (CDF) is used to bound the probability of HMI, and the thresholds for the error in the corrections and the noise in the measurements can be derived (Schempp et al., 2001). More importantly, the authors also develop a model to improve the monitoring capability of monitor stations, which is introduced below.
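A minimal sketch of a bias-inflated Gaussian overbound of the kind described above; the combination rule σ_ob = |μ|/K_HMI + σ is our reading of the text, and K_HMI = 5.33 is used purely as an example quantile:

```python
import math

def overbound_sigma(samples, k_hmi=5.33):
    """Inflated overbound sigma_ob = |mu| / K_HMI + sigma, so that
    N(mu, sigma_ob^2) can bound a biased residual population.
    k_hmi is an illustrative quantile, not a normative value."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / (n - 1))
    return abs(mu) / k_hmi + sigma
```

Folding the bias into the sigma in this way keeps the broadcast message a single scalar while still covering the shifted distribution at the K_HMI quantile.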
Groove model based on satellite-station geometry
In this subsection, a model is constructed to improve the bound on the clock-ephemeris range errors. The algorithm for the modifier s is adjusted under different situations, and a groove model is proposed to provide a solution.
To compensate for the insufficient monitoring capability of monitor stations when a satellite is moving over the boundary of the monitoring network, the geometry between the satellite and the monitor stations is introduced as a kind of prior information to overcome the shortcomings of the method used to determine the parameters.
The geometry of the monitor stations tracking a specific satellite is evaluated by the Monitoring Geometric Dilution Of Precision (MGDOP), or further ln(x_MGDOP), which is computed from the corresponding geometry matrix (Chen et al., 2017a). The relationship between MGDOP and UDREI is analyzed in Chen et al. (2017a) and Shao et al. (2009, 2011a). Based on this, the geometric information is used to adjust the shape of the clock-ephemeris covariance to ensure satellite integrity. Taking satellite PRN 6 as an example, the number of monitor stations tracking PRN 6 and the WAAS-reported σ_UDRE of PRN 6 for one day are shown in Fig. 2. The lateral axis denotes time (t) in units of days (d).
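MGDOP can be computed from the geometry matrix in the usual DOP fashion; a sketch, assuming each row of G is the unit line-of-sight from a tracking monitor station to the satellite augmented with a clock column (function name is ours):

```python
import numpy as np

def mgdop(los_units):
    """Monitoring GDOP for one satellite: G stacks a [unit LOS, 1] row for each
    tracking monitor station; MGDOP = sqrt(trace((G^T G)^-1)). At least four
    well-spread stations are needed for G^T G to be invertible."""
    G = np.hstack([np.asarray(los_units, dtype=float),
                   np.ones((len(los_units), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))
```

Fewer or more poorly distributed tracking stations inflate this value, which is why it rises sharply as a satellite approaches the edge of the monitoring network.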
As shown in Fig. 2, the number of monitor stations varies from 0 to over 30 and changes rapidly, while ln(x_MGDOP) varies inversely and synchronously. Since the trend of σ_UDRE follows that of ln(x_MGDOP), ln(x_MGDOP) can be taken as supplementary information from which to deduce s. Apparently, the trend of σ_UDRE or ln(x_MGDOP) for all satellites resembles a groove. Therefore, the geometry of a specific satellite and the monitor stations, namely MGDOP, can be taken as a boundary condition and used to compensate the algorithm for deducing s, denoted s_DOP. Based on this, a groove model is developed to describe the trend of s, as illustrated in Fig. 3. According to this model, the algorithms to compute s are given by (22), where the parameters s and s_DOP are computed from user equivalent range errors and geometry information, respectively, and NaN denotes not a number. For simplicity, let U(α_EL(1), α_EL(2)) denote the set of UEREs whose ELevation angle (EL) α_EL satisfies α_EL(1) ≤ α_EL ≤ α_EL(2), and let N(U(α_EL(1), α_EL(2))) represent the sample size of this set. As described in (22) and Fig. 3, the algorithm for s_1 is divided into three parts by the sample size N_0 of the set U(α_EL(1), α_EL(2)). After analysis of the requirement, the value of N_0 is set to 8.
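The elevation-binned UERE set and the three-way branching on its sample size (with N_0 = 8) can be sketched as follows; the branch labels are illustrative, since the exact formulas of (22) are not reproduced in the text:

```python
def uere_set(ueres, el_lo, el_hi):
    """Set U(el_lo, el_hi): UEREs whose elevation angle lies in
    [el_lo, el_hi]. `ueres` is a list of (elevation_deg, uere_m) pairs."""
    return [u for el, u in ueres if el_lo <= el <= el_hi]

def s1_branch(sample, n0=8):
    """Which branch of the three-part algorithm for s1 applies, gated by
    the sample size N of the elevation bin (N0 = 8 as in the text).
    Labels are hypothetical placeholders for the branches of (22)."""
    n = len(sample)
    if n == 0:
        return "nan"        # no data in the bin: s1 is undefined (NaN)
    elif n < n0:
        return "geometry"   # too few samples: fall back on s_DOP
    return "statistic"      # enough samples: estimate s1 from the UEREs
```

The geometry-based fallback is exactly the role the groove model assigns to s_DOP when statistics are scarce near the network boundary.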
There are three intermediate variables used to obtain the final parameter s. The parameter s_limit is used to limit or bound the noise of the pseudorange residuals relative to many error sources and can be computed from the standard deviation of the pseudorange residuals (or UEREs) (Blanch et al., 2012; Chen et al., 2017b, 2018). The variable s is used to evaluate the pseudoranges (or UEREs) in real time and, further, to monitor the state of a specific satellite; the UEREs are processed by the CNMP algorithm and the right-tail CDF and then used to deduce s. The variable s_DOP takes the geometry between the satellite and the monitor stations as another information source to compensate for the insufficient monitoring capability of the monitor stations when the satellite is moving over the boundary of the monitoring network; the geometric information is translated into s_DOP according to the relationship between them (Chen et al., 2017b; Shao et al., 2009, 2011b).
Finally, the covariance matrix for clock-ephemeris errors, P_3, is updated after the above processes and is used as the final covariance for clock-ephemeris errors, which will be formatted into UDRE and MT28.
To broadcast the clock-ephemeris covariance, the matrix P_3 must be compressed into UDRE and MT28. Firstly, P_3 is projected along the vector from a monitor station to the satellite, and the maximum of the projection is found by searching the service volume (Shao et al., 2011a; Zhao et al., 2014). Secondly, UDRE is obtained by searching the UDREI lookup table for a value that bounds the projection (SC-159, 2016). Thirdly, the matrix C to be broadcast is determined by finding the solution that minimizes the quantization error of the clock-ephemeris covariance (Chen et al., 2018; Kailath et al., 2000). Finally, UDRE and MT28 are updated within their update intervals.
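The projection step can be sketched as computing σ² = eᵀPe along each user line-of-sight vector e and taking the worst case over the service volume; this is a simplified sketch with a small matrix, not the full WAAS search or UDREI quantization:

```python
import math

def project_sigma(P, los):
    """Sigma of the clock-ephemeris error projected along a unit
    line-of-sight vector `los`: sigma^2 = los^T P los."""
    n = len(los)
    var = sum(los[i] * P[i][j] * los[j]
              for i in range(n) for j in range(n))
    return math.sqrt(var)

def max_projection(P, los_grid):
    """Search a grid of user line-of-sight vectors (standing in for the
    service volume) for the worst-case projected sigma, which UDRE must
    bound."""
    return max(project_sigma(P, los) for los in los_grid)
```

In practice the vectors are 4-dimensional (three position components plus clock) and the grid samples the whole service volume.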
Analysis and results
In this section, the performance of the proposed method (denoted ICB method) is compared with the direct statistic method (Li, 2018) and the latest WAAS covariance-based method (denoted CB method), which is not openly available but is updated from the old version (Walter et al., 2001; Wu & Peck, 2002). The direct statistic method stands for the latest method adopted in Chinese engineering practice. The covariance-based method refers to the method adopted by WAAS, which shows the best performance and is considered the latest covariance-based method. These two methods are therefore used for comparison, and the analysis demonstrates the rationality and effectiveness of ICB method. As for the data source, the broadcast ephemeris and observation data are from the websites of the International GNSS Service and the National Geodetic Survey. The data are processed at monitor stations and master stations with the three methods to generate the corresponding satellite integrity information. The monitor stations located in North America, shown in Fig. 4, are used to analyze the performance of ICB method. This section has two parts: performance in the range domain and performance in the position domain. In the first subsection, the integrity bounding percentages between UDRE (along with MT28) and UERE are computed. In the second subsection, the availability is analyzed.
Performance in the range domain
The integrity bounding percentages between UDRE (along with MT28) and UERE obtained by ICB method are compared with those of the state-of-the-art methods (DS method and CB method). The UDREs (along with MT28) of all GPS satellites are used to calculate the integrity bounding proportions with respect to 36 users (William, 2022). The mean of the integrity bounding proportions with the three methods for each satellite is depicted in Fig. 5. The mean integrity bounding proportion of DS method, ICB method, and CB method is 79.96%, 99.88%, and 99.90%, respectively. The integrity bounding rate of ICB method is thus 24.91% higher than that of DS method, a clear improvement, and close to that of CB method.
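The bounding proportion in such a comparison can be sketched as the fraction of epochs at which the broadcast bound covers the observed UERE; the integrity multiplier K = 5.33 is an assumption for illustration, not a value quoted in this section:

```python
def bounding_rate(ueres, udres, k=5.33):
    """Percentage of epochs at which the broadcast UDRE, scaled by the
    integrity multiplier K (assumed 5.33 here), bounds the actual UERE."""
    bounded = sum(1 for e, u in zip(ueres, udres) if abs(e) <= k * u)
    return 100.0 * bounded / len(ueres)
```

A rate near 100% means the broadcast integrity information almost always overbounds the true range error.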
Finally, the UDRE convergence rate for each satellite is calculated and shown in Fig. 6. As can be seen, the convergence rate of the UDRE with ICB method lies between those of CB method and DS method: this indicator with ICB method is 8.04% higher than that with CB method and 19.90% lower than that with DS method. In other words, ICB method performs better in the aspect of safety. A short convergence time is significant for SBAS integrity monitoring, especially for BDSBAS, the GPS Aided GEO Augmented Navigation system (GAGAN), and the MTSAT Satellite-based Augmentation System (MSAS), whose monitor stations have smaller spacing compared with WAAS.
Performance in the position domain
The performance in the position domain is compared among the three methods in this subsection, in terms of the availability of the LPV200 service.
The availability of the 36 users is analyzed as shown in Fig. 7 (availability of the LPV200 service at each selected user). The availability with ICB method is 5.63% higher than that with DS method and 0.77% lower than that with CB method. Considering that users 9 and 22 are located in the boundary area of North America and their observations are very poor, their results are accidental and abnormal. The conclusion is that the availability with ICB method is clearly higher than that with DS method and similar to that with CB method.
In summary, ICB method improves the integrity bounding rate and the availability dramatically compared with DS method, while its performance is close to that of CB method.
The signal-in-space errors of the proposed method and the broadcast ephemeris method are smaller than those of WAAS method. The accuracy of signal-in-space of the proposed method is over 18.22% higher than that of the broadcast ephemeris method in the three orbital dimensions, while the accuracy of WAAS method is lower than that of the broadcast ephemeris method in these three dimensions. The signal-in-space accuracy of the proposed method is over 32.03% higher than that of WAAS method in orbit. As for the satellite clock, both the proposed method and WAAS method are less accurate than the broadcast ephemeris method, and the proposed method is over 25.74% more accurate than WAAS method in clock. In total, compared with WAAS method, the proposed method can improve the accuracy of signal-in-space by over 25.74%. We can conclude that WAAS is designed to ensure integrity at the expense of signal-in-space accuracy, and that the proposed method can be used for some precise positioning applications (Zheng et al., 2019, 2022).
The influence of the different parameter values involved in the groove model has also been analyzed. A detailed analysis of integrity would require many more observations, because integrity depends on the number of monitor stations and their layout. For simplicity, experience indicates that the more conservative the parameter N_0, the higher the integrity in the edge area of the monitoring network, and the lower the service availability.
The proposed method has also been applied to satellite integrity monitoring in China. The results reveal that, compared with the direct statistic method, the integrity bounding rate in the pseudorange domain and the availability in the position domain are improved by 45.39% and 2.32%, respectively. The proposed method can provide APV-I services for most parts of China and even LPV200 services for some parts of China (Zheng et al., 2022). The method was tested with just one set of Chinese receivers at a monitor station, and the performance is satisfactory despite the limited quality of the monitor stations and their limited layout; it can therefore be used for satellite integrity monitoring in south China, at the bottom of the inverted-triangle station layout, and in the edge areas of the monitoring network.
Discussions and conclusions
An improved covariance-based method is developed to perform satellite integrity monitoring. Compared with the direct statistic method, the integrity bounding percentage is improved by 24.91% and the availability by 5.63%. Compared with the covariance-based method, the convergence rate for user differential range errors is improved by 8.04%. The advantages of the proposed method are summarized as follows:

1. The proposed method concerns both integrity and availability. The clock-ephemeris covariance matrix is adjusted by considering threat models and Gaussian overbounding theory to guarantee that the parameter UDRE, along with MT28, bounds the range error from clock-ephemeris with a high probability.

2. The scale model and groove model are beneficial for understanding the concept of SBAS integrity. These models address the inaccurate estimation of the clock-ephemeris covariance by considering the geometry between satellites and monitor stations. The scale model adjusts the covariance matrix of clock-ephemeris with consideration of abnormal cases. The groove model uses the geometry between satellites and monitor stations as prior information to compensate for the insufficient monitoring capacity of monitor stations when a specific satellite is moving over the boundary of the monitoring network. The groove model mitigates random errors and integrity threats, improves the accuracy of satellite integrity information, and offers an approach to satellite integrity monitoring that considers the motion of a monitored satellite above its monitoring network, which is beneficial for improving the tracking capability of the network with respect to satellites just over its boundary.
In summary, the proposed method can provide the LPV200 service for most of North America. To improve the performance further, some issues remain for future investigation. One important issue is that the data preprocessing at the monitor stations needs to be optimized because of its impact on data quality and hence on satellite integrity monitoring; this is the key to improving the integrity bounding rate. Another issue is the reliability of the proposed method: the availability at some selected users is not adequate, and the method needs to be refined to guarantee its robustness. Future work will include the optimization of monitor-station preprocessing and the improvement of the reliability of the proposed method.
Role of laboratory services in primary health center (PHC) outpatient department performance: an Indian case study
Background: In resource-constrained settings, primary health centers (PHCs) are critical for universal health coverage, and laboratory service is one of their important components. While PHCs and their performance receive attention, their laboratory services have been neglected in developing countries like India. Aim: To determine the role of different levels of PHC laboratory services in overall PHC performance. Methods: A cross-sectional study of 42 PHCs of Osmanabad District, Maharashtra, India was performed. The study used the level of laboratory services in the PHC as the independent parameter and PHC outpatient department (OPD) visits per day (≤ 80 versus > 80) as the dependent parameter. The control parameters were the number of medical doctors, the availability of laboratory technicians (LTs) and the population coverage of the PHC. Field visits were conducted to collect data on levels of laboratory services, while secondary sources were used for the other parameters. Logistic regression analysis was performed. Findings: The study found variation in PHC population coverage (10 788–74 702) and OPD visits per day (40–182) across PHCs. A strong positive association was observed between the level of laboratory services and the number of OPD visits per day. PHCs offering both malaria and tuberculosis in-house testing had higher odds (4.81) of receiving more OPD visits (> 80 per day) than PHCs not offering in-house testing for malaria and tuberculosis. This association was stronger in PHCs with lower population coverage (0–75 quartile) than in PHCs with higher population coverage (75–100 quartile). Conclusion: Focus on laboratory services is needed to enhance the performance of existing PHCs. Skill upgradation of existing LTs could help improve the contribution of existing laboratories to PHC functioning.
Introduction
Primary Health Centers (PHCs) enable cost-effective, accessible and universal health coverage for individuals and communities (World Health Organization, 1991). These PHCs are responsible for providing both preventive and basic curative services in poor rural areas of resource-constrained developing countries like India (MoHFW, 2012), where developed private health care facilities are lacking.
Good functioning of a PHC plays an important role in the utilization of its services by the population (Majumdar and Upadhyay, 2004; Monteserin et al., 2010). Laboratory service is recommended as an important component of a well-functioning PHC. Studies have focused on laboratory services in PHCs regarding their type and quality (Jain and Rao, 2015; Devane-Padalkar et al., 2016), functioning (Nanjunda, 2011), utilization (Zunic et al., 2011; Baig et al., 2014) and relevance in disease control (George, 2011; Rizwan et al., 2013; Pakhare et al., 2015).
Despite such focus on the PHC laboratory in the literature, 35.80% of the 25 354 Indian PHCs lack a laboratory technician (LT) to run the laboratory (Ministry of Health & Family Welfare, 2017). This is an important issue because the PHC laboratory is the only diagnostic facility for people living in rural areas of developing countries like India; yet it has been neglected in PHC settings for decades (George, 2011). A study by the Planning Commission on PHCs in India used parameters such as outpatient department (OPD) visits, number of institutional deliveries and program-specific indicators to measure PHC performance (Programme Evaluation Organization, 2001). In India, field experience indicates that district health officials may evaluate PHC performance primarily on non-laboratory outcomes such as OPD visits and Maternal and Child Health services. This suggests that policy-makers may not be adequately convinced of the laboratory's relevance to ensure an LT in all PHCs.
One of the reasons could be the lack of literature explaining the association between laboratory services and PHC performance. Knowledge of this association is important, especially in resource-limited health system settings, because policy-making is influenced by overall PHC performance rather than laboratory performance alone.
Accordingly, this study aims to determine the role of different levels of PHC laboratory services in overall PHC performance. The literature has used different parameters to measure PHC performance, approached in two ways: patient-side assessment and provider-side assessment. In patient-side assessment, customer perceptions of the PHC services are evaluated (Sathyananda et al., 2018). In provider-side assessment, the World Health Organization's recommended framework is used (World Health Organization, 2000). The parameters used can be mapped either to the functions of the facility (governance, financing, resources and services) (Anant et al., 2016) or to the objectives of the facility (responsiveness, fairness and patient health) (Sathyananda et al., 2018). In the current study, the focus is on choosing a parameter commonly used by decision-makers to assess the performance of PHCs.
Study settings
The study adopted a cross-sectional design, using the PHCs in Osmanabad District, Maharashtra, India as a case study to determine the relationship between laboratory services and PHC performance. Osmanabad district was chosen as the study site because it provides a unique PHC laboratory service setting: the level of laboratory service (LLS) does not influence the number of different laboratory tests available at the PHC. Consequently, the influence of differences in the number of available laboratory tests on patient utilization of the PHC is assumed to be controlled in the given setting. Furthermore, permission to conduct the study within the given time frame and resources could be obtained from the administrative authorities. A detailed explanation of PHC laboratory functioning in Osmanabad district is given in the Supplementary material.
Parameters for study
In the current study, LLS was measured based on the number of different types of tests performed in the PHC in-house laboratory. In the district, PHCs could be categorized into three LLS types: (1) all basic tests done in-house (AID), (2) all basic tests done in-house except the tuberculosis test (AIDet) and (3) all basic tests done in-house except the tuberculosis and malaria tests (AIDetm).
The parameter used for measuring PHC performance was chosen on three criteria: (1) it should be used by district officials in decision-making, (2) its data should be easily available across the country and (3) it should not represent a single program or disease. These criteria were chosen to enable replication of the study in other areas. Consequently, the parameter used to measure overall PHC performance was OPD visits per day. The idea of measuring patient visits to the PHC has been used in the literature, for example measuring the number of patient visits for first contact (Sathyananda et al., 2018) and child delivery (Kashyap, 2016).
The PHC data on the outcome parameter, 'number of OPD visits', were categorized into 'low OPD visits/day' PHCs (80 or fewer OPD visits per day) and 'high OPD visits/day' PHCs (more than 80 OPD visits per day). The threshold of 80 OPD visits per day was selected because this is the number of OPD visits recommended for PHCs by the Indian Public Health Standards. Table 1 shows the various control parameters used in the study.
The study also used an additional parameter, 'number of samples collected for testing', to determine whether LLS in a PHC and the 'number of samples collected for testing' are associated. Two indicators were used for this parameter: (1) 'number of malaria samples collected' and (2) 'number of tuberculosis samples collected'. These indicators were transformed into a binary categorical parameter. The steps involved in creating the binary parameter for the indicator 'number of malaria samples collected' were as follows:

1. The mean number of malaria samples collected across the PHCs (n = 42) in the district was calculated.
2. If the number of samples collected by a PHC was more than the mean value calculated in the previous step, the PHC was assigned the value 'one'; otherwise it was assigned the value 'zero'.
3. Step 2 was repeated for all 42 PHCs.
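The mean-threshold binarization described in the steps above can be sketched as:

```python
from statistics import mean

def binarize_by_mean(samples_per_phc):
    """Steps 1-3: compute the district mean of samples collected, then
    assign 1 to each PHC collecting more than the mean and 0 otherwise."""
    m = mean(samples_per_phc)
    return [1 if s > m else 0 for s in samples_per_phc]

# Three illustrative PHCs (counts are made up): only the third collects
# more samples than the district mean.
flags = binarize_by_mean([10, 20, 60])
```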
A similar approach was used to assign categorical values for the indicator 'number of tuberculosis samples collected' to all PHCs (n = 42) collecting tuberculosis samples.

Rahi Jain and Bakul Rao
Data collection
The data on PHC performance, that is, OPD visits per day, were provided by the district health office of Osmanabad from their database, as were the data on the 'number of malaria samples collected' and the 'number of tuberculosis samples collected'. The secondary data covered April 2015 to March 2016. The data on the 'population covered under each PHC' (PC) were obtained from secondary sources: the information on the number and names of villages under each PHC was obtained from the district health office of Osmanabad, and the population of each village from the Census 2011 database (Office of Registrar General and Census Commissioner, 2011). The populations of all villages under each PHC were then summed to estimate the population covered by the PHC, and the calculated population coverage was divided into four quartiles.
Field visits to all PHCs in the district (n = 42) were made during July-August 2015 to collect data on the parameters LLS in the PHC, number of medical doctors (NMD) and laboratory technician availability (LTA). During the visits, respondents were asked the following multiple-choice questions:

1) What type of tests were performed in the PHC in-house laboratory? 'All basic tests', 'All basic tests but not the tuberculosis test', 'All basic tests but not the malaria and tuberculosis tests'.
2) How many medical doctors are posted to the PHC? Zero, one or two.
3) Does the PHC have an LT posted? Yes or no.
Oral informed consent was obtained from each respondent regarding the purpose of the questions and the use of the responses. The meaning of each question was explained to respondents in both English and the local language, Hindi. The preferred respondent was the medical doctor or LT posted at the PHC; however, in one PHC, no medical doctor or LT was available during the visit, so a response was obtained from other PHC staff. The field data for each PHC were shown to district health officials to validate the responses, because these officials know the PHC staff details and supply laboratory test consumables to the PHCs.
Data analysis
Logistic regression analysis was performed to estimate the effects of the main parameter and the control parameters on the dependent parameter, in order to determine significant predictors of PHC performance. The PHC performance parameter, 'number of OPD visits', was used as the dependent parameter, y. Its categories, 'low OPD visits/day' and 'high OPD visits/day', were binary coded as y = 0 and y = 1, respectively. The analysis was done in R. The study performed univariate and multivariate logistic regression in two scenarios.
In scenario one, univariate logistic regression was performed to estimate the influence of LLS, NMD, LTA and PC on the odds of a PHC being in the 'high OPD visits/day' state (y = 1) rather than the 'low OPD visits/day' state (y = 0). Multivariate logistic regression was then performed to estimate the combined influence of the main and control parameters on the same odds.
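In such a logistic regression, a fitted coefficient β for a binary predictor translates into an odds ratio via exp(β); a minimal sketch of that interpretation step (the coefficient value below is illustrative, not from the study):

```python
import math

def odds_ratio_from_coef(beta):
    """For a binary predictor in a logistic regression, exp(beta) is the
    odds ratio: how many times higher the odds of the 'high OPD
    visits/day' state (y = 1) are when the predictor is present."""
    return math.exp(beta)

# A coefficient of 0 corresponds to an odds ratio of 1 (no effect).
no_effect = odds_ratio_from_coef(0.0)
```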
In scenario two, both univariate and multivariate logistic regression were performed using the approach adopted in scenario one, except that the fourth quartile of the PC parameter, that is, the 75-100 quartile, was omitted from the analysis. Scenario two was performed because the results from scenario one (for details see the 'Results' section) suggested that PHCs above the 75th quartile of population coverage also influenced OPD visits/day.
In another analysis, the conditional probability of number of samples collected for malaria and tuberculosis test with different LLS in PHC was determined. Cramer's V test for significance test was performed.
Characteristics of the PHC
The characteristics of the parameters are shown in Table 2. In this study, daily OPD visits lie in the range 40-182, and the PHCs are equally divided between those with at most 80 and those with more than 80 daily OPD visits. In terms of human resources, 20 PHCs have two medical doctors posted, while the remaining PHCs have only one. Although no PHC is without a medical doctor, many PHCs (45.24%) do not have an LT and therefore cannot perform the malaria and tuberculosis tests. All PHCs with an LT (n = 23) provide blood smear examination for malarial parasites. Among these 23 PHCs, 9 have received training for tuberculosis diagnosis, so in addition to other tests they also perform sputum testing for tuberculosis. In terms of population covered, the average coverage per PHC is 32 317, although individual PHC coverage varies from 10 788 to 74 702.
Role of LLS in overall PHC performance
As Table 3 shows, the logistic regression for 'low OPD/day' versus 'high OPD/day' identifies PHCs with the LLS 'AID' as significant. This means that the odds of a PHC with AID achieving high OPD (y = 1) are 4.81 times the odds for a PHC with AIDetm. That this ratio (4.81) is significantly distinct from 1 indicates that AID is a statistically significant predictor of being in the 'high OPD visits/day' state rather than the 'low OPD visits/day' state. Among the control parameters, PC in the '75-100 quartile' significantly influences PHC performance. The multivariate logistic regression obtained the same results.
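The odds-ratio comparison being made here can be sketched directly from a 2×2 table of counts; the counts below are illustrative, not the study's raw data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
        a = AID PHCs with high OPD,     b = AID PHCs with low OPD,
        c = AIDetm PHCs with high OPD,  d = AIDetm PHCs with low OPD.
    OR = (a/b) / (c/d)."""
    return (a / b) / (c / d)

# Hypothetical counts: 8 of 10 AID PHCs and 5 of 10 AIDetm PHCs
# reach high OPD, giving an odds ratio of 4.
example_or = odds_ratio(8, 2, 5, 5)
```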
In scenario two, the PHCs with very high population coverage, that is, the 11 of 42 PHCs in the '75-100 quartile' category of the PC parameter, were removed from the analysis (Table 3). The univariate logistic regression for 'low OPD visits/day' versus 'high OPD visits/day' again identifies PHCs with the LLS 'AID' as significant, but no control parameter is found significant. In the multivariate logistic regression, none of the parameters is significant. Furthermore, the odds ratio for LLS is higher when the PHCs in the '75-100 quartile' category are excluded than in the regression using all 42 PHCs.
Association between LLS and number of laboratory samples collected at the PHC
The conditional probabilities of the number of samples collected for the malaria and tuberculosis tests, given the different LLS in the PHC, are determined (Table 4). For malaria sample collection, the probability of above-average collection is higher for PHCs with the LLS 'AID' (0.78) than for PHCs with 'AIDet' (0.36) or 'AIDetm' (0.47). Similarly, the probability of above-average tuberculosis sample collection is higher for PHCs with 'AID' (0.89) and 'AIDet' (0.57) than for 'AIDetm' (0.26). A medium strength of association between sample collection and LLS is observed using Cramer's V test for both malaria (0.49) and tuberculosis (0.31) sample collection.
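Cramer's V for a contingency table such as Table 4 can be computed from the chi-square statistic; a self-contained sketch (the study used its own data, so the toy tables below are illustrative only):

```python
import math

def cramers_v(table):
    """Cramer's V for an r x c contingency table (list of rows of counts):
    V = sqrt(chi2 / (n * (min(r, c) - 1)))."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    k = min(len(table), len(table[0])) - 1
    return math.sqrt(chi2 / (n * k))
```

V ranges from 0 (no association) to 1 (perfect association), so values around 0.3-0.5 match the "medium strength" reading above.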
Discussion
The PHC is an important health care facility in rural areas, and ignoring the laboratory within the PHC may not be appropriate for maximizing PHC performance. The study showed a positive association between LLS in the PHC and the number of OPD visits per day. In the univariate logistic regression analysis, this positive association is significant at the 90% confidence level for PHCs providing all tests, that is, PHCs with AID. The very high odds ratios (4.81, 4.21, 6.25 and 5.52) suggest the strength of this association. In the literature, a strong positive correlation was obtained between laboratory service-related parameters and overall hospital performance (a composite of patient results, staff and work system results, hospital efficiency and effectiveness results, and flexibility performance) for Jordanian hospitals (Ali and Alolayyan, 2013). A study on US hospitals showed that clinical technology, inclusive of laboratory technology, drives hospital clinical quality and financial performance (Li and Collier, 2000).
However, the LLS in the PHC was, unexpectedly, not found to be a significant predictor of overall PHC performance in the multivariate analysis. Furthermore, the large confidence interval indicates that some caution is needed in interpreting the absolute effect of LLS on PHC performance. These findings suggest that LLS in the PHC could be a strong trigger for improving PHC performance, but is not alone a sufficient condition. In the Indian context, patients can access public laboratory facilities only on referral from a medical doctor (Jain and Rao, 2015). The laboratory can thus help the physician in better decision-making, which could lead to better PHC performance; the literature suggests that laboratory results could contribute to up to two-thirds of medical decision-making (Forsman, 1996). The literature has also identified various factors that can disrupt the physician's role in the PHC, such as lack of resources (Hazra and Das, 2016) and medical doctor motivation (Shah et al., 2016).
The study showed that LLS is more relevant for PHCs with a population catchment area at or below the Indian Public Health Standards (IPHS) recommendation. The odds ratio of LLS is higher when PHCs with very high population coverage are excluded than when they are included. PHCs with high population coverage can get more OPD visits simply because of their catchment area, whereas PHCs with smaller population coverage may need to provide better services so that patients with different health care needs willingly visit the PHC.
In West Bengal, India, a strong correlation was reported between the number of OPD hours and patient perception of PHC service quality (Bhattacharya, 2015). In another case study, inadequate PHC services were reported to affect the number of patients whose needs could be catered for by the PHC (Hazra and Das, 2016). A reduction in antenatal checkups from the first to the fourth checkup was observed due to poor PHC facilities (Dehury and Samal, 2016).

The study showed that the level of training provided to the LT influences PHC performance. PHC performance shows a very low and insignificant positive association with LTs trained only in the malaria test, as compared with LTs trained in both the malaria and tuberculosis tests. This suggests that Osmanabad district could enhance the overall performance of its 14 PHCs by training LTs to conduct tuberculosis testing, allowing better utilization of available resources. Based on the literature, the average annual cost of delivering health care services at an Indian PHC could be around $113 683-158 883 (1 USD = 64.69 INR) (Prinja et al., 2016). This is relevant for resource-constrained countries like India that need to maximize the cost-effectiveness of their health care facilities.
The study showed that PHCs with better LLS reported more sample collection for testing. This suggests that higher LLS in PHC could increase laboratory service utilization, which can in turn improve PHC services. In the literature, one study reported that a change in LLS, such as a reduction in laboratory turnaround time, significantly reduced patient length of stay (Holland, Smith and Blick, 2018).
The district used for the current study has a daily OPD visit range (40-182) similar to the range (25-150 OPD visits/day) reported in the literature (Rizwan et al., 2013; Dar, 2015; Raut-Marathe et al., 2015). The average population coverage per PHC (32 317) is likewise comparable to figures reported in the literature (Bhatt and Joshi, 2013; Rizwan et al., 2013; Prinja et al., 2016; Tushi and Kaur, 2017). This indicates that the overall context in which PHCs function in the study district is not far from the national scenario; hence, the relevance of these results could extend beyond the district to the whole country.
Finally, this study is relevant as it strengthens the case for putting more focus on public laboratory services in developing countries like India. It provides evidence to decision-makers that the laboratory is important in enhancing PHC performance and achieving the greater goal of universal health coverage. The study has an important policy implication: the mere availability of laboratory tests may not be a sufficient criterion for patients to visit the PHC. It may be important to have better LLS, such as performing tests in-house.
In-house tests may help reduce test turnaround time, which could help doctors diagnose quickly and provide appropriate medical services. In-house laboratory testing is especially relevant in primary health care settings, which provide basic laboratory tests such as the microscopy-based tuberculosis test and the malaria test. These tests take only a few minutes to produce results, but in rural field settings of a developing country like India, the delay in results can be as long as a few days because of difficulties in accessing laboratory facilities. For example, during the field survey, interactions with locals and PHC staff suggested that many PHCs are not connected by any form of public transport to nearby laboratory facilities. Further, even where public transport is available, it is limited to one or two trips per day. Additionally, a PHC commonly sends a staff member to another laboratory facility only once a day.
One limitation of this study is its small sample size, restricted to a limited geographic area, which could be one reason for the low statistical power. Further, the study was performed only longitudinally, not temporally. Another possible weakness is that the study may have missed other important laboratory-related parameters that play a role in PHC performance. Additionally, the study does not consider the role of PHC staff other than the LT and the medical doctor in influencing the relationship between LLS in PHC and PHC performance.
Conclusion
The study concludes that laboratory services could play an important role in maximizing PHC performance. Higher LLS in PHC could help attract more OPD visits. Training existing LTs could be a cost-effective approach in resource-constrained settings to maximize the returns from the existing medical workforce in PHCs. Finally, the study found that PHCs with lower population coverage could benefit more from higher LLS than other PHCs in enhancing their performance in terms of the number of OPD visits per day.
Kilonova Parameter Estimation with LSST at Vera C. Rubin Observatory
The upcoming Vera Rubin Observatory's Legacy Survey of Space and Time (LSST) opens a new opportunity to rapidly survey the southern sky at optical wavelengths (i.e., ugrizy bands). In this study, we aim to test the possibility of using LSST observations to constrain the mass and velocity of different kilonova (KN) ejecta components from the observation of a combined set of light curves from afterglows of γ-ray bursts and KNe. We used a sample of simulated light curves from the aforementioned events, as they would have been seen during the LSST survey, to study how the choice of observing strategy impacts the parameter estimation. We found that the observing strategy that best balances light-curve coverage, filter coverage, and reliability of the fit involves a high number of visits, with long-gap pairs of about 4 hr every two nights in the same or different filters. These features of the observing strategy will allow us to recognize the different stages of the evolution of the light curve and to gather observations in at least three filters.
Introduction
The detection of GW170817, a binary neutron star (BNS) merger, using both gravitational waves (GWs) and photons, marked a groundbreaking milestone in multimessenger astronomy. Initially, GW170817 was identified solely by its gravitational-wave signal (Abbott et al. 2017a, 2017b); subsequently, an array of electromagnetic (EM) signals from ground-based and space-borne telescopes covering the entire EM spectrum confirmed the presence of a luminous electromagnetic counterpart to the event (Abbott et al. 2017b). In particular, approximately 11 hr after the GW detection, the search for the EM signal of GW170817 led to the discovery of an electromagnetic counterpart named AT2017gfo associated with the GW signal (Cannon et al. 2012; Abbott et al. 2017b; Andreoni et al. 2017; Arcavi et al. 2017; Coulter et al. 2017; Díaz et al. 2017; Drout et al. 2017; Evans et al. 2017; Hu et al. 2017; Kilpatrick et al. 2017; Lipunov et al. 2017; McCully et al. 2017; Pian et al. 2017; Smartt et al. 2017; Soares-Santos et al. 2017; Tanvir et al. 2017; Troja et al. 2017; Utsumi et al. 2017; Valenti et al. 2017; Buckley et al. 2018).
This discovery played a crucial role in addressing numerous issues in high-energy astrophysics and fundamental physics. It significantly contributed to resolving the origins of short γ-ray bursts (sGRBs), the existence of kilonovae (KNe), and the processes behind heavy element synthesis. Additionally, it offered valuable independent constraints on two key aspects of astrophysics. First, it provided insights into the previously unknown equation of state of neutron stars (NSs), as discussed in Abbott et al. (2018). It also helped refine the understanding of the Hubble constant (Abbott et al. 2017c, 2021; Cantiello et al. 2018; Fishbach et al. 2019; Hotokezaka et al. 2019; Kashyap et al. 2019; Coughlin et al. 2020a, 2020b; Dietrich et al. 2020; Doctor 2020). These findings represent significant steps forward in our comprehension of the cosmos and have opened new avenues for future research in these fields.
The concept of KNe as transient phenomena powered by the radioactive decay of synthesized heavy r-process elements, resulting from the ejection of neutron star matter during compact mergers, was first highlighted by Lattimer & Schramm (1974). Several subsequent studies have contributed to our understanding of this event, including Li & Paczyński (1998), Freiburghaus et al. (1999), Lattimer & Prakash (2000), Metzger et al. (2010), Roberts et al. (2011), Tanaka & Hotokezaka (2013), Grossman et al. (2014), Metzger & Fernández (2014), Kasen et al. (2015), and Barnes et al. (2016). Along with the ejection of neutron-rich material, a relativistic jet is also produced. The jet, moving close to the speed of light, emits a powerful beam of γ-ray radiation, leading to the so-called sGRB prompt emission. As the jet interacts with the interstellar medium, it decelerates and produces a detectable afterglow powered by synchrotron emission, observable from X-rays to radio frequencies. Before the detection of GRB 170817A, the connection between sGRBs and compact object mergers had been supported only by indirect evidence (Tanvir et al. 2013; Fong et al. 2015; Abbott et al. 2017b; Goldstein et al. 2017; Savchenko et al. 2017). However, the simultaneous detection of GWs and γ-rays demonstrated that at least a portion of sGRBs are indeed associated with the merging of BNSs. Currently, the conventional scheme for the progenitors of GRBs is a subject of debate, as counterexamples have emerged in recent years (Ahumada et al. 2021; Mei et al. 2022; Rastinejad et al. 2022; Troja et al. 2022; Yang et al. 2022). To complicate matters further, indirect detections of KN emission have been proposed, supported by the identification of optical and near-infrared (NIR) excesses in the flux of some GRB afterglows (Tanvir et al. 2013; Troja et al. 2019; Jin et al. 2020; Rossi et al. 2020; Rastinejad et al. 2022).
Constraining the properties of these particular events is crucial in resolving the ongoing debate regarding the dominant site for the production of r-process nuclei in the Universe. Some studies (e.g., Kasen et al. 2017; Anand et al. 2023) argue that BNS mergers are the primary source, while others (Siegel 2019) suggest the collapse of massive stars as the main contributor.
sGRB detection rates range between 10 and 40 per year for the GRB instruments on board the Neil Gehrels Swift Observatory (Gehrels et al. 2004) and the Fermi satellite, respectively (Abdo et al. 2008). However, the optical counterparts for these bursts have proven to be elusive, mainly because the localization of Fermi sGRBs typically spans hundreds of square degrees (e.g., Mong et al. 2021; Ahumada et al. 2022). The follow-up of BNS and neutron star-black hole (NSBH) mergers detected by the International Gravitational-Wave Network, consisting of Advanced LIGO, Advanced Virgo, and KAGRA (LVK), during the third observing run (O3) has not been fruitful, possibly because the GW sky maps are similarly large (Andreoni et al. 2019, 2020a; Coughlin et al. 2019; Goldstein et al. 2019; Gompertz et al. 2020; Kasliwal et al. 2020; Chang et al. 2021; Petrov et al. 2022). The lack of counterpart detections can therefore be explained by the fast fading nature of both KNe and afterglows, the large sky maps to observe, and the low local rate of compact binary mergers (Dichiara et al. 2020).
Empirical constraints on KN rates by optical surveys set an upper limit of R < 900 Gpc^-3 yr^-1 (Andreoni et al. 2020b, 2021) for KNe similar to AT2017gfo. Moreover, only a fraction of those will be detectable, as they could be beyond the detection limit of available telescopes. Obscuration and absorption by the Galactic plane is also a significant limitation to the detection of counterparts. In light of these constraints, the expected BNS detection rate of 4-80 events per year for the LVK network after 2020 (Kagra Collaboration et al. 2018b; Petrov et al. 2022), based only on GW searches, will likely provide only a few tens of detections throughout the next decade.
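The scale of these numbers can be checked with a back-of-the-envelope Euclidean estimate; the horizon distance below is an assumed value for illustration, not a figure from this work:

```python
import math

# KN rate upper limit from the text: 900 Gpc^-3 yr^-1, expressed in Mpc^-3 yr^-1
R_upper = 900e-9
# Assumed detection horizon for this sketch (Mpc); not a value from the paper
d_horizon = 200.0

# Euclidean comoving volume out to the horizon
volume = (4.0 / 3.0) * math.pi * d_horizon**3  # Mpc^3

# Expected events per year (upper limit), before any survey efficiency factors
n_per_year = R_upper * volume
print(round(n_per_year, 1))  # a few tens of events per year at most
```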
The new Legacy Survey of Space and Time (LSST; LSST Collaboration et al. 2009) is expected to be a game-changing facility in astrophysics. Time-domain astronomy will particularly benefit from the large ∼10 deg^2 field of view of the camera combined with the depth achievable with the 8.4 m diameter primary mirror, which has an effective aperture of 6.423 m.
Depending on the choice of the LSST cadence, the project could unveil a large number of KNe and other types of fast fading transients (e.g., Andreoni et al. 2022).
Current estimates of KN rates indicate that LSST will be able to detect ≈10^2-10^3 events within z = 0.25 during the entire survey (Della Valle et al. 2018). Moreover, Andreoni et al. (2022) demonstrated that LSST is expected to find more than 300 KNe out to ≈1400 Mpc over a ten-year survey. Among those, we expect about 3-32 KNe recognizable as fast-evolving transients similar to the one associated with GW170817. Furthermore, KNe have been analyzed only in association with other events such as GRB or GW detections; thus the possibility to detect and recognize such events is strictly related to the ability to survey as fast as possible the wide error boxes from GW signals and, once located, to promptly analyze their EM emission.
In spite of the technological and instrumental advances across multiple wavelengths, the fast-evolving nature of KNe will likely impede their spectral analysis.For this reason, this paper aims to analyze multiple observational strategies that only rely on photometry to derive physical parameters of KN sources without using spectra.Throughout this paper we assume that we know the location and the energy of the merger from other messengers (GW and GRB).
We consider that KNe can potentially be detected as an additional component to the optical and NIR afterglow of short GRBs, in the temporal window that lasts from a few hours to a few weeks after the onset of the burst (e.g., Kasen et al. 2015; Barnes et al. 2016; Fernández & Metzger 2016; Metzger 2017). This assumption follows the findings of Rossi et al. (2020), where the authors were able to isolate a golden sample of GRB afterglows whose behavior indicates the presence of a KN component in the afterglow light curves. However, they stated that strong constraints on the redshift or NIR observations are needed to be able to find a KN contribution in the afterglow. In this paper we used a method similar to that of Rossi et al. (2020) for comparing GRB afterglow light curves with KNe in all the LSST observable bands, to define an optimal observing strategy that can enhance the ability to detect KNe and characterize their sources, even using their indirect observations through GRB afterglows.
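The idea of a KN appearing as a flux excess over the afterglow can be sketched by summing the two components in flux space and converting back to magnitudes. The magnitudes below are hypothetical placeholders, not the models used in this paper:

```python
import math

def mag_sum(m1, m2):
    """Combined magnitude of two components, added in flux space."""
    flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(flux)

# Hypothetical components at one epoch: a fading afterglow and a KN near peak
m_afterglow = 22.5
m_kn = 21.8

m_total = mag_sum(m_afterglow, m_kn)
excess = m_afterglow - m_total  # brightening relative to the bare afterglow
print(round(m_total, 2), round(excess, 2))
```

A sufficiently well-sampled afterglow light curve makes such an excess stand out against the expected power-law decay, which is the detection channel assumed here.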
The paper is organized as follows: Section 2 describes the impact of parameter estimation on understanding the physics behind the KN explosion; Section 3 describes the methodology of the simulations and the analysis of the extracted data; Section 4 summarizes the observing strategies; Sections 5 and 6 are dedicated to the analysis of the model and of the features of the observing strategies that impact the parameter estimation; Sections 7 and 8 describe the obtained results; finally, Section 9 is dedicated to the discussion of the results and the conclusion.
The Need for Parameter Estimation
KN observations encode information on both the ejecta properties and the mass ejection processes that happen during the merger and post-merger phases (Metzger 2017; Villar et al. 2017; Coughlin et al. 2018). Part of the NS (whether the system is a BNS or an NSBH) can be expelled and become unbound (Davies et al. 1994; Rosswog et al. 1999). Tidal forces right before the merger can cause partial disruption of the NSs (or of the single NS in the case of an NSBH), with material launched at mildly relativistic velocities in the orbital plane of the system. Once the accretion disk is formed, neutrino radiation and nuclear recombination, together with magnetodynamic viscosity, can drive mass outflow from the disk (see Ascenzi et al. 2021, and references therein). The velocity, mass, and geometry of the ejecta strictly depend on the properties of the system involved; thus a statistical study of a population of KNe would help us understand the distribution of those parameters.
Several studies of GW170817 attempted to infer the amount of material ejected (e.g., Alexander et al. 2017; Chornock et al. 2017; Cowperthwaite et al. 2017; Metzger 2017; Smartt et al. 2017; Tanvir et al. 2017; Coughlin et al. 2018, 2019; Breschi et al. 2021; Heinzel et al. 2021; Ristic et al. 2022; Collins et al. 2023). However, the extent and the properties of the ejected material from this event remain uncertain and in considerable tension with theoretical expectations for the amount of each type of ejecta component (Radice et al. 2018; Korobkin et al. 2021; Nedora et al. 2021; Collins et al. 2023), likely in part because of underestimated uncertainties in these theoretical evaluations of the ejecta (Henkel et al. 2023). KNe are rare and faint events compared to supernovae and other common classes of extragalactic transients, so they are hard to detect. Observational constraints, due to their intrinsic peculiarity, affect our understanding of the processes the source undergoes. This implies that we are not able to grasp the complexity of the compact object merger without making some important simplifications, such as assuming a simplified treatment of the radiative transport that overlooks detailed three-dimensional anisotropic considerations. Similarly, opacity is treated in a simplified manner, without taking into account sophisticated nuclear reaction networks and composition variations (e.g., Wollaeger et al. 2018, 2021). Recent calculations using improved anisotropic radiative transfer and opacity calculations still arrive at similar conclusions for the description of the AT2017gfo event (Metzger 2017; Almualla et al. 2021; Heinzel et al. 2021; Ristic et al. 2022; Collins et al. 2023): a relatively high mass of the blue component is expelled from the poles, while a significant mass of the red component is preferentially ejected toward the equator.
In particular, Ascenzi et al. (2019) show the posterior distribution of the KN parameters (ejecta mass, velocity, and lanthanide content) extracted from the observed multiwavelength afterglow light curves, assuming that the excess flux of the GRB afterglow light curve can be related to the radioactive decay of the KN ejecta. To estimate the parameters, the authors used a joint model of KN and GRB afterglow to reproduce the observed data; they produced a statistical distribution of the values from a sample of observed GRB afterglow light curves they claimed to be associated with KN events. The uncertainties on the derived parameters are considerable, primarily because of the sparsely populated light curves and the incomplete understanding of the nuclear reaction network responsible for producing the KN component. As a consequence of these uncertainties, certain cases result in parameter distributions that appear nearly uniform, lacking distinct patterns or trends.
Well constrained parameters will allow us to: (i) add features to the theoretical description of the event, or (ii) break degeneracies between models (e.g., data from AT2017gfo agree with both two- and three-component ejecta models, as shown in Cowperthwaite et al. 2017). The lack of both photometric and spectroscopic data poses a limitation, compelling us to address the challenge of constraining models solely on the basis of photometric data. This situation highlights the opportunity to enhance observing strategies to optimize the chances of verifying theoretical predictions concerning KN events. By improving observational techniques and data collection, we can better assess and validate the theoretical models of these events.
Detection Rates in O4
Rubin Observatory's primary survey, named Wide Fast Deep (WFD), covers an area of 18,000 deg^2 through its "universal cadence," while approximately 10% of the observing time is reserved for other programs, including intensive observation of Deep Drilling Fields (DDFs). Compared to typical points on the sky, the DDFs will receive deeper coverage and more frequent temporal sampling in at least some of the LSST camera's ugrizy filters.
To estimate the detection rate of EM counterparts potentially detectable by LSST, we constructed a population of KN and GRB afterglow light curves, starting from a realistic population of BNSs, following the method described in Colombo et al. (2022, 2023), summarized in Appendix A. We considered two sets of limiting magnitudes: one shallow, related to the expected depth of WFD visits, and one deep, related to the DDFs.
Assuming a network of LIGO, Virgo, and KAGRA detectors with the projected O4 sensitivities and a 70% uncorrelated duty cycle for each detector, we find a GW detection rate of 7.4 (-5.5, +11.3) yr^-1, 77% of which can produce a KN and 53% a relativistic jet. The shallow limiting magnitudes for WFD are sufficient to detect the majority of KNe in the y and z bands and all the KNe in the i, r, g, and u bands, with a corresponding detection rate of 5.7 (-4.2, +8.7) yr^-1. In contrast, the fraction of events with a detectable optical GRB afterglow is between 2% and 5%, with a maximum detection rate of 0.37 (-0.27, +0.56) yr^-1. This is due to the large abundance of off-axis jets within the estimated GW horizon, with a corresponding faintness of these counterparts. In Table 1 we report all the detection rates and the assumed limiting magnitudes.
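Whether a counterpart falls above or below the survey depth follows from the distance modulus. The peak absolute magnitude below is a rough, assumed value for an AT2017gfo-like KN, and the two depths are assumed stand-ins for WFD-like and DDF-like limits, used only to illustrate the comparison:

```python
import math

def apparent_mag(abs_mag, d_mpc):
    """Distance modulus: m = M + 5 log10(d / 10 pc), with d given in Mpc."""
    return abs_mag + 5.0 * math.log10(d_mpc * 1e6 / 10.0)

M_kn_peak = -16.0  # assumed AT2017gfo-like peak absolute magnitude (illustrative)

m_300 = apparent_mag(M_kn_peak, 300.0)  # apparent magnitude at 300 Mpc

limit_shallow, limit_deep = 24.0, 26.0  # assumed WFD-like and DDF-like depths
print(round(m_300, 1), m_300 < limit_shallow, m_300 < limit_deep)
```

Under these assumptions, a KN at the farthest reference distance sits comfortably above even the shallow limit near peak, consistent with the high KN detection fractions quoted above.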
Method
Motivated by the discovery of AT2017gfo and its luminous blue emission, several attempts were made to find similar cases in archival short-GRB observations (e.g., Troja et al. 2019; Rossi et al. 2020). For instance, Troja et al. (2019) found that some nearby events have optical luminosities comparable to AT2017gfo. In particular, they showed that sGRB 150101B was a likely analog to GW170817, characterized by a late-peaking afterglow and a luminous optical KN emission dominating at early times. This finding suggests that KNe similar to AT2017gfo could have been detectable in the optical even though they might not have been explicitly identified before the discovery of GW170817. Driven by this motive, we aim to study the performance of the KN parameter estimation to better understand the physical properties of the KN ejecta and their evolution using LSST observational strategies. The work is divided into three major parts: 1. simulation of a sample of KN + GRB light curves (Section 3.1); 2. simulation of the observed light curves using a realistic cadence strategy (Section 4); 3. estimation of the parameters' variance using a Bayesian fitting algorithm to retrieve the posterior distribution (see Section 5).
The parameter estimation of a transient light curve is usually done after the follow-up, so the ansatz here is that distance estimation, contaminants, and candidate selection have already been taken care of; we are only interested in analyzing how the search design impacts the parameter estimation.
Light-curve Simulations
Considering the combinations of KN and GRB events, simulated light curves allow us to build up a science case for KNe that are well localized and that have a constrained estimate for the distance. To approach the complexity of the KN models (Pang et al. 2020) we use the nuclear multimessenger astronomy algorithm, nmma (Pang et al. 2023). The software gives us the possibility to generate a distribution of realistic ejecta masses described by a population of BNS mergers, using the procedure from Dietrich et al. (2020), where they developed a framework to combine multiple constraints on the masses and radii of NSs, including data from GWs, EM observations, and theoretical nuclear physics calculations. The simulations used to derive the prior distribution of the ejecta mass employ quasiequilibrium circular initial data in the constant rotational velocity approach, i.e., they are consistent with the Einstein equations and in hydrodynamical equilibrium. The model assumes the SFHo (Steiner-Fischer-Hempel baseline model in Steiner et al. 2013) equation of state (EOS), which satisfies current astrophysical constraints (e.g., Miller et al. 2019).
In our study, we employed the KN model introduced by Perego et al. (2017). This model takes into account the radiation produced by two distinct components: dynamical ejecta and disk ejecta. The disk ejecta can be further divided into two parts. The first part is wind ejecta (Ruffert et al. 1997; Kiuchi et al. 2015; Fernández et al. 2017), which is propelled in directions close to the polar axis by the neutrino flux originating from the hotter regions of the disk during the neutrino-dominated phase. The second part of the disk ejecta is known as secular ejecta (Fernández & Metzger 2013; Radice et al. 2018) and arises from viscous angular momentum transport.
By analyzing the distribution of ejecta masses, we were able to derive the distribution of ejecta velocities. This velocity distribution is influenced by the explosion energy; for more details on this aspect, one can refer to Metzger (2017).
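As a rough illustration of how the explosion energy sets the ejecta velocity scale, one can use the kinetic-energy relation v ≈ sqrt(2 E_kin / M_ej) (see Metzger 2017 for the full treatment). The energy and mass inputs below are assumed, order-of-magnitude values, not numbers from this work:

```python
import math

C_CGS = 2.998e10    # speed of light, cm/s
M_SUN_G = 1.989e33  # solar mass, g

def ejecta_velocity_c(e_kin_erg, m_ej_msun):
    """Characteristic ejecta velocity (in units of c) from E_kin = (1/2) M v^2."""
    v = math.sqrt(2.0 * e_kin_erg / (m_ej_msun * M_SUN_G))
    return v / C_CGS

# Assumed order-of-magnitude inputs for a single KN ejecta component
v_over_c = ejecta_velocity_c(e_kin_erg=1e50, m_ej_msun=0.01)
print(round(v_over_c, 2))  # roughly a tenth of the speed of light
```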
To take into account all the possible parameter correlations, we construct distributions of priors for the KN parameters, shown in Figure 1. Among the model parameters injected into nmma to simulate KN light curves, the ejecta mass and velocity distributions are modeled by the binary mass distribution in Dietrich et al. (2020); the opening angle for the wind ejecta component is drawn from a uniform distribution on (π/6, π/4). Then, drawing randomly from these distributions, we characterize the entire sample of simulated sources.
To create a science case to frame the experiment we made some assumptions: 1. we have information on the distance to the event thanks to a GW trigger; 2. we have information on the energy of the explosion because we detect an associated GRB; 3. if there is an associated GRB we can assume the localization of the KN as known. We combine the simulated population with a single afterglow model for each viewing angle, so that the differences in the resulting light curves are due to the KN contribution (the model parameters for the afterglow and the KN are listed in Tables 2 and 3). This choice is driven by the possibility of detecting KNe as a flux excess in the afterglow evolution (Rossi et al. 2020), and because the frequency of observed afterglows exceeds the rate of KNe (≈5 KNe yr^-1 versus ≈100 afterglows yr^-1; see LSST Collaboration et al. 2009, as reference for the reported rate of afterglows), we expect to recognize KNe using well sampled afterglow light curves. All sources are simulated assuming three reference distances along the line of sight: 42, 100, and 300 Mpc. Eventually, the effect of the viewing angle cannot be neglected; thus we also consider three reference viewing angles of 0, π/4, and π/2 rad. This particular choice was made so that the ability to retrieve the physical parameters from the light curves would not be influenced by effects on the light curve due to distance (e.g., selection effects due to the limiting depth of the survey or Malmquist bias) or viewing angle. The lower distance corresponds to the distance of AT2017gfo as a reference, while the median and higher distances were set based on considerations about the detectable sGRB rate; indeed, Dichiara et al. (2020) estimated the rate of sGRBs detectable within 200 Mpc. Scaling this value to greater distances, we estimated N_sGRB (d_L < 350 Mpc) ≈ 1.2 yr^-1, considering the lower bound of the uncertainty range; hence we set the higher reference distance to 300 Mpc.
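The reference grid and prior draws described above can be sketched as follows. The wind-ejecta opening-angle prior matches the uniform distribution stated in the text, while the ejecta-mass prior here is a placeholder log-uniform range standing in for the Dietrich et al. (2020) population actually used in the paper:

```python
import math
import random

random.seed(42)

DISTANCES_MPC = [42.0, 100.0, 300.0]              # reference distances
VIEWING_ANGLES = [0.0, math.pi / 4, math.pi / 2]  # reference viewing angles, rad

def draw_kn_parameters():
    """Draw one simulated source. The wind-ejecta opening angle follows the
    paper's uniform prior on (pi/6, pi/4); the mass prior is a placeholder."""
    return {
        "d_L": random.choice(DISTANCES_MPC),
        "theta_view": random.choice(VIEWING_ANGLES),
        "theta_wind": random.uniform(math.pi / 6, math.pi / 4),
        # placeholder log-uniform ejecta mass between 1e-3 and 1e-1 Msun
        "m_ej": 10 ** random.uniform(-3, -1),
    }

sample = [draw_kn_parameters() for _ in range(1000)]
assert all(math.pi / 6 <= s["theta_wind"] <= math.pi / 4 for s in sample)
```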
Kilonova Model
nmma uses fitting formulae based on numerical simulations of the merger and post-merger dynamics to compute the ejecta properties (Radice et al. 2018; Krüger & Foucart 2020) as a function of the binary parameters (namely the component masses and the EOS). The procedure used is presented in Dietrich et al. (2020), where they survey 5000 EOSs that provide possible descriptions of the structure of NSs, recovering those that reproduce astrophysical constraints, such as the NS maximum mass. For more details see the Supplementary Material of the referenced paper.
We then evaluate the accretion disk mass using the fitting formula from Barbieri et al. (2020), whose predictions are consistent with numerical simulations of both symmetric and asymmetric binaries. The computation is based on a semianalytical model in which axisymmetry relative to the direction of the binary angular momentum is assumed. The ejecta, assumed to be in homologous expansion, are divided into polar angle bins, and the thermal emission at the photosphere of each angular bin along radial rays is computed following Grossman et al. (2014) and Martin et al. (2015), taking into account the projection of the photosphere in each bin. See Table 2 for the distribution of reference parameters for the light-curve simulations.
Afterglow Model
GRBs associated with gravitational-wave events are, and will likely continue to be, viewed at a larger inclination than GRBs without detections of gravitational waves. As demonstrated by the afterglow of GW170817A, this requires an extension of the common GRB afterglow models, which typically assume emission from an on-axis top-hat jet. We used the Python package afterglowpy (Ryan et al. 2020), which characterizes the afterglows arising from structured jets, providing a framework covering both successful and choked jets. The temporal slope before the jet break is found to be a simple function of the ratio between the viewing angle and the effective opening angle of the jet.
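For orientation, in the on-axis, pre-jet-break regime the standard synchrotron closure relations give a temporal decay index α = 3(p − 1)/4 for the optical flux, F ∝ t^(−α) (slow cooling, homogeneous medium). This small helper encodes that generic textbook relation, not afterglowpy's internal implementation:

```python
def pre_break_slope(p):
    """Temporal decay index alpha (F ~ t^-alpha) for slow cooling,
    nu_m < nu < nu_c, homogeneous circumburst medium: alpha = 3(p - 1) / 4."""
    return 3.0 * (p - 1.0) / 4.0

# For a typical electron spectral index p = 2.2, the optical flux fades as t^-0.9
alpha = pre_break_slope(2.2)
print(round(alpha, 3))  # 0.9
```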
To accommodate an initial structure profile E(θ) in afterglowpy, we consider the flux as a function of the polar angle θ. This assumes that each annulus of constant θ evolves independently, as an equivalent top hat of initial width θ_j = θ. This is a very good approximation when transverse velocities are low: when the jet is ultrarelativistic and has not begun to spread, and when the jet is nonrelativistic and the spreading has ceased (van Eerten et al. 2010). The model allows for several angular structures of the GRB jet. In our exercise we use one GRB model for the afterglow, since we are interested in the ability to infer the KN parameters. The parameters used for the GRB model are shown in Table 3, and we assumed a Gaussian jet structure so as not to correlate effects of the viewing angle or beam direction with effects coming from the peculiar jet-environment interaction due to a particular geometry. The parameters are set to produce simulations that are as realistic as possible; for this reason we assumed the parameters from (2014), so the reference afterglow template can be considered representative of the short-GRB population. We also considered different reference values for short GRBs from Fong et al. (2015), but the different assumptions do not appear to dramatically impact the results, so we adopted the values in Table 3.

Notes (Table 2). The luminosity distance, D_L, is needed to generate the model, as well as the angle for the polar emission of the wind ejecta, θ_w, the ejecta velocity, v_ej, the ejecta masses, M_ej,dyn and M_ej,wind, the extinction, Ebv, and the exponent β of the mass-velocity relation, where M is the total mass, v is the velocity of the mass envelope, and v_0 is the average minimum velocity of the ejecta. This relation is used to reproduce the structure of the matter within the moving ejecta (see Perego et al. 2017, for details of the KN model).
MAF and OpSim
A comprehensive discussion of the software made available by the Rubin Observatory for community contribution to the survey design is beyond the scope of this paper. Interested readers are referred to the opening paper (Bianco et al. 2021) and its references for a full examination of the software's workings (Delgado et al. 2014; Delgado & Reuter 2016; Yoachim et al. 2016; Naghib et al. 2019) and for more details and information on this topic.
The Operations Simulator software (OpSim; Delgado et al. 2014) generates a simulated strategy based on a set of criteria, such as the total number of images per field per filter, including simulated weather, telescope downtimes, and other occasional interruptions. The survey requirements (survey strategy) are the input to an OpSim run, and the output is a database of observations with associated attributes (e.g., image 5σ depth) that specify a succession of simulated observations for the 10 yr survey. Since its creation, the Rubin OpSim has gone through various revisions, the main differences among which are the methods used to optimize the pointing sequences and filters to achieve the desired survey features (Bianco et al. 2021).
The Metric Analysis Framework (MAF) API is a software package created by the Rubin Observatory (Jones et al. 2014) to evaluate how various simulated Legacy Survey of Space and Time (LSST) observing strategies impact specific science goals. The MAF has been public since its creation to facilitate community input in the strategy design, and it interacts with OpSim primarily through SQL, allowing the user to select filters or time ranges (e.g., the first year of the survey). Further, the choice of slicers allows the user to group observations. For example, one may "slice" the survey by equal-area spatial regions, using the HEALPix scheme of Górski et al. (2005). Throughout, we choose a HealpixSlicer with resolution parameter NSIDE = 16, corresponding to a pixel area of 13.4 deg² (and thus the choice that most closely matches the size of the Rubin LSST field of view; see Ivezić et al. 2019, for reference to Rubin LSST characteristics). Thus, to pass from the simulated theoretical light curves produced according to the procedure described in Section 3 to the simulated observed light curves, we used the MAF. In this way we are able to apply parameter estimation tools to the simulated observations and analyze the impact of the observing strategies on the ability to retrieve the parameters injected in nmma to produce the theoretical simulations (more details in Section 7).
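The quoted pixel area follows directly from the HEALPix definition, in which the sphere is divided into 12·NSIDE² equal-area pixels. A quick check of the value used above:

```python
import math

def healpix_pixel_area_deg2(nside):
    """Area of one HEALPix pixel in square degrees: the full sky
    (~41,253 deg^2) divided into 12 * nside^2 equal-area pixels."""
    full_sky_deg2 = 4.0 * math.pi * (180.0 / math.pi) ** 2
    return full_sky_deg2 / (12 * nside ** 2)

print(round(healpix_pixel_area_deg2(16), 1))  # 13.4 deg^2, as used in the text
```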
Impacts of Observing Strategy on Parameter Estimation
The ability to populate light curves is very limited, as the number of filters and the number of detections in each filter vary depending on the intrinsic properties of the event (see Figure 2). This could impact our capability to infer the KN parameters. To evaluate the performance of nmma in estimating the KN parameters and reproducing the injected light curve, we used the posterior's variance as a metric for the performance of the fitting procedure. We analyze three features that typically impact the performance of a sampler: 1. the number of detected points on the light curve; 2. the number of available filters; 3. the peak magnitude of the light curve.
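As a sketch of this metric, assuming posterior samples are available per parameter; the aggregation across parameters used here (a simple sum of the variances before taking the logarithm) is our own illustrative choice, since the text does not specify it.

```python
import math
import statistics

def log_posterior_variance(samples_by_param):
    """Performance proxy used in the text: the logarithm of the
    posterior's variance, here summed over the sampled parameters.
    Lower values indicate a better-constrained fit."""
    total_var = sum(statistics.pvariance(s) for s in samples_by_param.values())
    return math.log10(total_var)

# Toy posteriors (hypothetical samples, for illustration only): a tightly
# constrained run versus a poorly constrained one.
tight = {"M_dyn": [0.010, 0.011, 0.009, 0.010], "v_dyn": [0.20, 0.21, 0.19, 0.20]}
loose = {"M_dyn": [0.001, 0.05, 0.02, 0.09],   "v_dyn": [0.05, 0.30, 0.10, 0.25]}

print(log_posterior_variance(tight), log_posterior_variance(loose))
```

A well-populated light curve should drive the sampler toward the "tight" regime, i.e., lower values of this metric.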
In the upper left panel of Figure 3 we analyzed the light-curve sampling by changing the time resolution of the template.
When we refer to the general trend, the figure shows that for light-curve data spaced more than ≈6 hr apart, the dynamical ejecta is more poorly constrained than the other parameters, with the exception of two cases. However, in those two cases the values of ejecta mass and velocity are closer to the others, suggesting that the effect could be related to a small fluctuation around the best-fit parameter configuration. This is interpreted as an effect of the fitting procedure: because of the time gap between the detections we miss the possibility of catching the maximum. However, constraining the rise and the fall of the light curve would help in constraining the KN model parameters. Specifically, the sampler cannot constrain the global minimum of the cost function (in our case the likelihood of the detections; see Appendix B) in the M_dyn-v_dyn-M_disk-θ_w hypercube when the light curve is not populated. This is because there is a large degeneracy of states that reproduce the same collection of fluxes (top panels in Figure 4). When the sampler can constrain the values of the parameters in the hypercube, the performance appears to be better (bottom panels in Figure 4). The same happens for the other panels; however, when analyzing the performance of the sampler with respect to the number of filters (upper right panel in Figure 3), we see that this is not a very important observational feature in constraining the posterior's variance. For the experiment shown in the upper right panel of Figure 3 we set the detection cadence at 9 hr.
Finally, the peak magnitude has an impact on the performance similar to that of the upper left panel, as shown in the bottom panel of Figure 3. This behavior can be interpreted in the same way, considering that the closer the peak is to the limiting magnitude, the fewer detections of the light-curve features we get. The main problem is that the degeneracy in the parameter space produces a different minimum in the cost function because of the systematics we analyzed here; thus the area surveyed to find the global minimum is larger. We are still able to infer the values of the injected parameters within some confidence level, with the drawback of losing precision.
Impact of the Model's Description on Parameter Estimation
The assumptions on the radiation transport and on the nuclear network, which are at the foundation of a model that attempts to describe the observed event, can influence the ability of the cost function to find the global minimum of the parameter space, i.e., the configuration of parameters that best describes the observations. Due to this connection between the model and the cost function, we analyze whether the behavior we highlighted in the previous paragraph is related to the model used in the fitting procedure.

Figure 3. Analysis of the fitting performance for a single light curve. Each panel shows the logarithm of the posterior's variance as a function of (i) the time gap between two consecutive points on the light curve (upper left panel), (ii) the number of available filters (upper right panel), and (iii) the maximum magnitude of the light curve (bottom panel). The plot can be read as follows: lower values on the y-axis represent better performances. Interpreting the plots, we find that the most information is gained when we populate the light curve with points that are very close in time and the event is bright (upper left and bottom panels). There is a net improvement in the performance when considering all the filters together; however, in the other cases the difference in performance is negligible.

For each event, the signal-to-noise ratio (S/N) of the GW strain has been evaluated for the LVK detectors. For events above the S/N threshold in the population, M_ej and v_ej are estimated using Equations (18) and (22) in Radice et al. (2018), considering the SFHo EOS. Taking into account that both dynamical and disk ejecta contribute to the mass of the ejecta, the total m_ej is estimated as m_ej = M_dyn + M_disk. We aim to analyze the impact of observations on constraining the mass ratio; thus, with X = [m_ej, v_ej], we estimated the uncertainties on the KN parameters, considering the uncertainties on the two merging NS masses to be equal. The resulting quantity, the ratio between the squared variance of the mass ratio and that of the ejecta mass or velocity, can be interpreted as the sensitivity to the system's photometric observation. Table 4 shows its form for q < 1 and q ≈ 1, related to the ejecta mass or ejecta velocity estimations (see Figure 5). The inference of model parameters from a physical model assumes that the value inferred represents the value from the event's underlying model. However, due to oversimplifications in the theoretical treatment or technological limitations, this is not always true.
Figure 5 shows how the model impacts the parameter values (i.e., ejecta mass and ejecta velocity) when we try to infer them assuming we can measure other observables, namely the total binary mass and the mass ratio. If the physical model used to describe the event is highly degenerate in those parameters, we cannot distinguish between sets of parameters that produce the same set of observables, i.e., the uncertainties on the parameters are so high that the range of possible inferences related to that measurement is very broad. This is the case when we infer the ejecta velocity. Indeed, Figure 5(b) shows that if we survey the parameter space following v_ej, the high uncertainty translates into a broader region of local minima in the cost function. Thus, with every inference we make to look for the global minimum we are likely to end up in a state very similar to where we started; this is because the uncertainty distribution is almost uniform, which means that whatever the true ejecta velocity is, the chance of being in any other region close to that value is the same, and we are likely to miss the global minimum. Conversely, Figure 5(a) shows the case in which we survey the parameter space in the direction of the ejecta mass m_ej. This direction of the parameter space appears to be very helpful in constraining the specific parameter, showing that the uncertainty on the inferred parameter can change by an order of magnitude. Hence, the possibility of matching the global minimum can be higher if we survey the m_ej direction of the hypercube. We can therefore conclude that we expect a much better constrained inference for the ejecta mass than for the ejecta velocity.
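The uncertainty propagation described in this section can be illustrated with first-order error propagation using finite-difference derivatives, with equal uncertainties on the two NS masses as assumed in the text. The ejecta-mass function below is a toy placeholder, not the Radice et al. (2018) fitting formula.

```python
import math

def propagate_sigma(f, m1, m2, sigma_m):
    """First-order error propagation,
    sigma_f^2 = (df/dm1)^2 sigma_m^2 + (df/dm2)^2 sigma_m^2,
    with equal uncertainties on the two NS masses.
    Derivatives are taken by central finite differences."""
    h = 1e-5
    df_dm1 = (f(m1 + h, m2) - f(m1 - h, m2)) / (2 * h)
    df_dm2 = (f(m1, m2 + h) - f(m1, m2 - h)) / (2 * h)
    return math.sqrt((df_dm1 ** 2 + df_dm2 ** 2) * sigma_m ** 2)

def m_ej_toy(m1, m2):
    # Placeholder ejecta-mass function of the two masses
    # (illustration only, NOT the Radice et al. fitting formula).
    q = min(m1, m2) / max(m1, m2)
    return 0.01 * q * (m1 + m2)

sigma = propagate_sigma(m_ej_toy, 1.4, 1.3, sigma_m=0.05)
print(sigma)  # uncertainty on the toy ejecta mass
```

With the real fitting formulae in place of `m_ej_toy`, the same machinery yields the sensitivity ratios tabulated in Table 4.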
Observing Constraints from the Simulations
In Section 3 we described how we simulated light curves, with a combination of KN and afterglow emissions from the same source. These simulations are then used as reference templates to produce mock observed KN light curves during the operation time of LSST. To tackle the problem, we associate to each KN + afterglow template an explosion time, uniformly chosen within the 10 years of the survey, over the whole observed sky. Ultimately, the ability to discover fast and faint transients such as those we simulated largely depends on the area observed (which in our experiment is the field of view of the pointings), the depth of those observations, the cadence, and the filters adopted by the survey (all the simulated survey strategies are listed in Table 5).
Using the baseline as a test for our machinery, we applied the observational constraints from this OpSim to simulate what the observed light curves look like. For each reference distance and viewing angle we simulated a set of 100,000 KN + afterglow events, for a total of 900,000 sources.
Figure 6 shows that the contours of the detectability regions change dramatically for more distant events, and that the best filters to follow up KN + afterglow events are the g + z bands, due to the possibility of following the events for longer, out to 100 Mpc. Whereas optical filters on the bluer side of the spectrum perform well for closer events, the relative importance of bluer and infrared filters appears unchanged as the source's distance increases. Analyzing the detected events, defined as all the events with a light curve that has more than two detected observations in any filter, we find that as the source becomes more distant, events will be observable for a duration that is almost equal to that between two consecutive observations. Light curves with this duration above the S/N threshold will have just two detected points at a given wavelength; thus a multiwavelength analysis is mandatory to be able to use these data, otherwise no parameter estimation will be possible. Figure 6 does not change dramatically under the assumption of different viewing angles, thus for all the cases this figure appears to be a good reference for the description of the results.
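The detection criterion above can be sketched as follows, reading "more than two detected observations in any filter" as a per-filter count (a total count across all filters would be an alternative reading); the limiting magnitudes used here are illustrative, not the survey depths.

```python
def is_detected(light_curve, limiting_mag):
    """Detection criterion from the text: an event counts as detected if
    some filter has more than two observations brighter than the
    limiting magnitude (smaller magnitude = brighter)."""
    counts = {}
    for band, mag in light_curve:  # (filter, observed magnitude) pairs
        if mag < limiting_mag[band]:
            counts[band] = counts.get(band, 0) + 1
    return any(n > 2 for n in counts.values())

limits = {"g": 24.0, "z": 23.3}  # illustrative depths, not the survey values
lc_far  = [("z", 23.0), ("z", 23.1), ("g", 24.5)]              # only 2 z detections
lc_near = [("z", 21.0), ("z", 21.5), ("z", 22.0), ("g", 22.8)]  # 3 z detections
print(is_detected(lc_far, limits), is_detected(lc_near, limits))  # False True
```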
Because of the nominal limiting magnitudes (LSST Collaboration et al. 2009) we have a very small time window in which to catch farther KNe, with a maximum duration of ∼6 days in NIR bands for the simulated sources. The nonuniformity of the filter coverage through the entire survey impacts the possibility of characterizing the explosions. Moreover, even though simulations are produced in all six LSST bands ugrizy, the light curves simulated in Figure 2 show only the filter that produced a detection for the specific event in that time window (MJD 60000-62000). The late-time evolution of the light curve will not be detectable for sources close to 300 Mpc; thus, to be able to constrain model parameters, a higher priority should be given to closer sources if and when detected, because they will be characterized by a well populated light curve. The drawback that has to be stressed is the very small time window in which the light curve is detectable, which for closer sources (i.e., 42 Mpc in our simulations) is from 5 to 10 days and for farther sources (i.e., 300 Mpc in our simulations) from 1 to 5 days from the explosion. For a simulated event at 300 Mpc, the main features that are affected are the peak magnitude and the duration of the event above the limiting magnitude. This is because we get less flux from farther events and thus we reach the limiting magnitude of the survey earlier when observing the light curve's evolution. Similarly, this impacts the fall-rate distribution, which cannot be accurately measured in all cases because we lose information on the late-time morphology of the light curve.
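The distance dependence of the peak magnitude follows from the distance modulus. Ignoring K-corrections and extinction, moving the same source between the reference distances dims it by a fixed number of magnitudes:

```python
import math

def dimming_mag(d_near_mpc, d_far_mpc):
    """Change in apparent magnitude when the same source is moved from
    d_near to d_far (distance modulus difference, ignoring K-corrections
    and extinction): delta_m = 5 log10(d_far / d_near)."""
    return 5.0 * math.log10(d_far_mpc / d_near_mpc)

# The three reference distances used in the simulations.
print(round(dimming_mag(42, 300), 2))   # ≈ 4.27 mag fainter at 300 Mpc than at 42 Mpc
print(round(dimming_mag(100, 300), 2))  # ≈ 2.39 mag
```

This is why a source at 300 Mpc reaches the survey's limiting magnitude several days earlier in its evolution than the same source at 42 Mpc.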
Constraints on Kilonova Model
Follow-up of the events in one or more filters will allow us to infer the model's parameters within the 3σ uncertainties. We perform the fit using dynesty (Koposov et al. 2022), a Python package to estimate Bayesian posteriors and evidences (marginal likelihoods) using dynamic nested sampling methods. By adaptively allocating samples based on posterior structure, dynamic nested sampling has the benefits of Markov Chain Monte Carlo (MCMC) algorithms that focus exclusively on posterior estimation, while retaining nested sampling's ability to estimate evidence and sample from complex, multimodal distributions. Nested sampling is a method for estimating Bayesian evidence that was first proposed and developed by Skilling (2006). The basic idea is to approximate the evidence by integrating the prior in nested "shells" of constant likelihood. Unlike MCMC methods, which can only generate samples proportional to the posterior, nested sampling simultaneously estimates both the evidence and the posterior (see Appendix B).

Table 4. The form of the sensitivity ratio for q < 1 and q ≈ 1, related to ejecta mass or ejecta velocity estimations. Notes. For an NS, the assumption M_i* − M_i ≈ 0 has been considered. The tidal deformability parameter, Λ, is modeled from Chatziioannou (2020). A relation between these variables can be found in Timmes et al. (1996).
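A bare-bones sketch of Skilling's nested sampling on a 1D toy problem (uniform prior on [−5, 5], unit Gaussian likelihood, analytic evidence ≈ 0.1) may make the "nested shells" idea concrete. The brute-force rejection step used to draw from the likelihood-constrained prior is only viable in this toy setting; real samplers such as dynesty replace it with far more efficient schemes.

```python
import math
import random

random.seed(0)

def loglike(theta):
    # Log-likelihood of a unit Gaussian centered at 0 (toy problem).
    return -0.5 * theta * theta - 0.5 * math.log(2.0 * math.pi)

LO, HI = -5.0, 5.0  # support of the uniform prior

def nested_sampling(n_live=100, n_iter=1000):
    """Skilling's nested sampling: peel off nested likelihood shells,
    accumulating evidence Z = sum_i L_i * (X_{i-1} - X_i), where
    X_i ~ exp(-i / n_live) is the expected prior-volume shrinkage."""
    live = [random.uniform(LO, HI) for _ in range(n_live)]
    live_logl = [loglike(t) for t in live]
    z, x_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda k: live_logl[k])
        l_star = live_logl[worst]
        x_i = math.exp(-i / n_live)
        z += math.exp(l_star) * (x_prev - x_i)
        x_prev = x_i
        # Draw a replacement from the prior subject to L > L*
        # (brute-force rejection: fine in 1D, hopeless in many dimensions).
        while True:
            t = random.uniform(LO, HI)
            lt = loglike(t)
            if lt > l_star:
                live[worst], live_logl[worst] = t, lt
                break
    # Contribution of the remaining live points.
    z += x_prev * sum(math.exp(l) for l in live_logl) / n_live
    return z

z = nested_sampling()
# Analytic evidence: (1/10) * integral_{-5}^{5} N(0,1) dtheta ≈ 0.1
print(z)
```

The estimate carries a statistical uncertainty of roughly sqrt(H/n_live) in ln Z, so it should land within a few percent of 0.1 here.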
Figure 5 shows that the model itself acts as a source of uncertainty on the estimation of KN parameters, with a greater ratio of uncertainties on the ejecta mass and velocity as the total mass of the progenitor system grows. The behavior of the curve suggests that the uncertainties on the parameters tend to be lower when the system is more massive. However, when we consider a fixed mass for the progenitor system, we measure smaller uncertainties on the ejecta mass as the mass ratio increases, while we note the opposite behavior for the ejecta velocity. The trend of the uncertainties with respect to the total mass of the system implies that there is high degeneracy in the parameter values when looking for the best configuration to replicate the observed light curves. This is because the model reproduces similar light curves for all the combinations of parameters within the uncertainty range. Figure 7, indeed, shows that for the majority of the events the uncertainty on all parameters is 100%. However, for simulated events with higher ejecta mass and smaller velocities, there is a tail of ≈10% of events with better constrained parameters. This is important because constraining the EOS models requires very accurate measurements of the KN parameters, so 30% accuracy, even if good, is not sufficient; thus a dedicated target of opportunity (ToO) appears to be a necessity to follow up KNe from an external trigger if the baseline is the chosen strategy for the survey. To support this we compare, among the OpSims, the mean number of detections per filter (the higher this number, the better the accuracy; see Figure 8).
The results show that the OpSims that allow revisits within the same night have a higher fraction of well populated light curves, which implies a greater ability to constrain the uncertainties of the model parameters. The takeaway message from this work is that from survey observations we can expect to improve our detection ability, because we observe deeper and wider, changing the configuration of filters from time to time. However, to efficiently constrain the model's parameters we need to maximize the information content of our observations, increasing the number of filters and the number of detections with which we observe the evolution of the event.
Discussion and Conclusions
This work aims to understand whether the LSST observing strategy can help collect data that will improve our understanding of KN sources. Sagués Carracedo et al. (2021) analyzed how to optimize the strategy for the distribution of filters and survey depth to boost the detection efficiency for these faint and fast-evolving transients. They explored the dependence on the mass of the ejecta, the geometry, the viewing angle, the wavelength coverage, and the source distance. They found that the detection efficiency has a strong dependence on viewing angle, especially for filters blueward of the i band. This loss of sensitivity can be mitigated by early, deep observations. Efficient searches for the gri counterpart of KNe at ∼200 Mpc would require reaching a limiting magnitude m_lim = 23 mag within 5 days from explosion, to ensure good sensitivity over a wide range of the model phase space. Toward this end, Andreoni et al. (2022) analyze different choices of filter setting and exposure time, and they find that observations in the redder izy bands are crucial for identification of nearby (within 300 Mpc) KNe, which could be spectroscopically classified more easily than more distant sources. LSST's potential for serendipitous KN discovery could be improved by increasing the efficiency with the use of individual 30 s exposures (as opposed to 2 × 15 s snap pairs), with the addition of red-band observations coupled with same-night observations in the g or r bands, and possibly with the further development of a new rolling-cadence strategy. However, even if detected, the KNe are not sampled enough to allow parameter estimation without ancillary data.
This work showed how to constrain the parameters of the KN model using data from the LSST-Vera Rubin Observatory. This new facility is expected to push forward our knowledge of the physics of compact objects and improve the statistics of unique transient events such as KNe (Andreoni & Kool 2020b). However, to be able to deeply comprehend the compact objects' EOS and thermalization processes (Korobkin et al. 2012; Barnes et al. 2016), together with the energy-dependent photon opacities in r-process matter (Even et al. 2020; Tanaka et al. 2020), it is essential to constrain the uncertainties originating from various assumptions in the modeling. This is due to the complexity of the underlying physics, which is affected by diverse interactions and scales (see Metzger 2017, and references therein).
The possibility to extract information about the source of a KN event depends on a number of assumptions, including: 1. the KN model; 2. the available filters for the observation; 3. the distance; 4. the time window in which the event is above the observation limit.
The search for KN light curves can be achieved through discovery of a transient during searches over the entire probability distribution map from the GW trigger, or as a targeted search in a small number of specific and very limited regions of the sky. Below we refer to those two scenarios as "All-sky search" and "Targeted search." We analyzed the possibility to constrain the ejecta mass, velocity, and opacity from the photometric multiwavelength search for KN events, assuming we know the distance and position of the KN from other messengers (i.e., GW, GRB).
There is a tradeoff between the fitting performance and the features of the observed light curves. In order to be able to extract precise fits of a model's parameters, some conditions of measurability need to be satisfied. They can be summarized as follows: 1. recognize the different phases of the light-curve evolution; 2. gather observations in at least three filters.
The criteria listed above can be translated into survey strategy design: more revisits (with the best strategy considering long-gap pairs of about 2 hr every two nights) in the same or different filters will weigh differently in the observing strategy in the two scenarios (All-sky search and Targeted search). We use as a proxy of the performance the average number of detections per filter (see Appendix C for details on the figure of merit, FoM). We then normalize the FoM with respect to the FoM for the baseline, N_baseline = 7, for both the All-sky search and the Targeted search. For targeted searches, satisfying at least one of the two criteria performs better with respect to the baseline, as is visible in Figure 9. Ultimately, the configuration of a ToO in the main strategy will improve our ability to extract precise information using only the photometric light curve to constrain the source's ejecta parameters.
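The normalization described above can be sketched as follows, with hypothetical per-filter detection counts (the full FoM definition is in Appendix C; this is only the median-and-normalize step):

```python
import statistics

def fom(detections_per_filter, n_baseline=7):
    """Figure-of-merit proxy used in the text: the median number of
    detections per filter, normalized to the baseline value
    (N_baseline = 7)."""
    return statistics.median(detections_per_filter) / n_baseline

# Hypothetical counts per filter (ugrizy) for two strategies.
baseline_like  = [5, 6, 7, 8, 7, 7]
long_gaps_like = [9, 12, 14, 15, 13, 11]
print(fom(baseline_like), fom(long_gaps_like))  # 1.0, ~1.79 (nearly double)
```

A strategy with FoM ≈ 2 relative to the baseline corresponds to the "almost double the performance" cases discussed below.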
As shown in the middle panel of Figure 9, when a targeted search is considered, the presto_gap (IDs = 63, 68, 74) and long_gaps_np (IDs = 5, 12, 19, 37, 40) OpSims almost double the performance of the baseline in constraining the model parameters for farther sources; this is due to the color information these strategies allow one to obtain. Indeed, presto_gap_half adds a third visit within the same night for half the nights of the survey, with variations on the time interval between the first pair of visits (standard separation of 33 minutes) and the third visit. Among this family the best strategy is presto_gap3.5, which considers triples spaced 3.5 hr apart (g + r, r + i, i + z are the initial pairs). long_gaps_np similarly extends the gap between the pair of visits, modifying it to a variable time period of between 2 and 7 hr. The pair of visits are both in the same filter, in any of griz (g + r, r + i, or i + z pairs). In some of the simulations these long-gap visits are obtained throughout the survey, while in others the longer time separations do not start until year 5. Among this family of OpSims the best performing strategy is long_gaps_nightsoff0, which considers long-gap pairs every night. When we analyze farther sources it appears that having long-gap pairs every 4-7 nights allows one to better constrain the model parameters, with long_gaps_np_nightsoff7 being the best performing OpSim in this family for a source at 300 Mpc. Overall the best performing OpSim is vary_gp_gpfrac0.30 (ID = 104). The vary_gp family is a set of simulations that investigate the effect of varying the amount of survey time spent covering the background (non-WFD-level) Galactic plane area. The combination of image quality, set of filters per observation, and cadence allows the best coverage for our simulated light curves.
Across the different panels in Figure 9 the difference in performance can be analyzed when the population of events is considered at different viewing angles (from top to bottom, θ_v = 0, π/4, π/2). The general discussion still holds; however, a worsening of the performance is evident. The long_gaps_np family appears to outshine the baseline in all cases. This indicates the importance of revisiting in the same night with different filter pairs to obtain a well constrained characterization of the events.
When an All-sky search (i.e., a search on the entire probability distribution map from the GW trigger) is considered, the main criterion for a better performance appears to be the homogeneity of the filter coverage (see Appendix C), meaning that strategies that respect the criteria in a higher number of regions perform better than the baseline.

Figure 9. Comparison plot of all the v2.0 OpSims for a targeted search in a fixed pointing in the sky. The performance is normalized with respect to the baseline, N_baseline = 7. Each plot considers a population of KN + afterglow events simulated with a fixed viewing angle; from top to bottom the viewing angle is θ_v = 0, π/4, π/2. The metric counts the median number of detections per filter, which is used as a proxy to evaluate the strategy that will allow the most accurate parameter estimation, as described in Section 5. OpSim indexes shown on the x-axis are described in Table 5.

From the results (see Figure 8) it is shown that the baseline performs better than almost all the OpSims; those with better performance differ from this strategy in varying the exposure time or the number of images per exposure to force the limiting magnitude to be homogeneous over the whole sky (see the vary family details, LSST Collaboration et al. 2009). Indeed, the best performing OpSim is multi_short, which takes four short (5 s) visits per filter in a row, stops after 12 short visits per filter in a year, and achieves ∼700 visits per pointing.
In short, the baseline is a good compromise among all the strategies for KN science, and in the future an improvement of the ability to constrain the parameters of serendipitously discovered KN events is also foreseen. However, small changes to this strategy, oriented to adding a third image for color information within the 4 hr gap, or ad hoc ToO strategies to follow up the evolution of the light curve, would enhance by a factor of 2 the ability of the targeted search to describe KN events with the most reliable KN models known up to now.
Figure 2. An example of a single template (thick lines) observed at (R.A., decl.) = (197.45, −23.38) (the position of NGC 4993, the host galaxy of AT2017gfo), at the three reference distances [42, 100, 300] Mpc during three time windows through the 10 years of the survey. To reproduce the observed detections, shown in the panels as filled colored points, we used the baseline_v2.0_10yrs strategy design. The triangles represent nondetections. The way the strategy plans to look at the footprint implies that we will be able to detect the events but we will not have the same ability to characterize them, because we will lose information about the color and the morphology of the light curve. Indeed, we simulate the light curve in all the bands, but only the z band appears to be detectable in this region and at the survey time.
Figure 5 is somehow related to the model used in the fitting procedure. We consider a BNS population from Dietrich et al. (2020), as pointed out in Section 3, fit to currently available observational constraints from both GW-detected and Galactic BNS binaries as described in Appendix A of Colombo et al. (2022). The merger rate is obtained by convolving the delay time distribution (represented as the time gap between the formation of the binary system and its merger) with the cosmic star formation rate (Madau & Dickinson 2014), normalized to the local rate density R_0 = 347 (+536, −256) Gpc⁻³ yr⁻¹.
Figure 4. Corner plots with an example of a worst-case scenario (top row) and a best-case scenario (bottom row) when applying the fitting procedure.
Figure 5. The sensitivity plot described in detail in Section 6. The panels show the sensitivity of the parameters to the variation of the BNS, represented by the primary mass and the mass ratio, q.
Figure 6. The distribution of the features we can extract from the observed light curves. Each panel represents the distribution of two features: peak magnitude and duration of the light curve above the limiting magnitude; different lines represent the distance at which the event is simulated. Details in Section 7.
Figure 7. Distribution of the uncertainties of the model parameters from the fit. Top panels represent the PDF of the parameters' relative error; bottom panels represent the cumulative density function (CDF) of the same variable.
Figure 8. Comparison plot of all the v2.0 OpSims for an All-sky search; the performance is normalized with respect to the baseline. The metric counts the median number of detections per filter, which is used as a proxy to evaluate the strategy that will allow the most accurate parameter estimation, as described in Section 5. OpSim indexes shown on the x-axis are described in Table 5.
Table 1. Estimation of the Expected Observation Rate of KNe in Association with a GW and GRB Afterglow. Notes. Details of how the single values in the table have been estimated are given in Appendix A. Below each rate, we report in parentheses the fraction of the total O4 BNS GW rate (HLVK O4). The GW detection limits refer to the S/N net threshold. Limiting magnitudes of LSST filters are in the AB system (LSST Collaboration et al. 2009); detection rates are in units per year. The reported errors, given at the 90% credible level, stem from the uncertainty of the overall merger rate, while systematic errors are not included. These results and the underlying methodology are described in Appendix A.
Figure 1. The distribution of the model parameters we injected in nmma to simulate KN light curves; the ejecta mass and velocity distributions are modeled by the binary mass distribution in Dietrich et al. (2020); the open angle for the wind ejecta component is drawn from a uniform
Table 2. The KN Parameters and Their Probability Density Functions (PDFs) Used to Create Simulated Light Curves for the KN Components with nmma.
Table 3. The GRB Parameters from van Eerten et al. (2010) and D'Avanzo et al. (2014) that Were Used to Create Simulated Light Curves for the GRBs. Notes. The luminosity distance, D_L, is needed to generate the model, as well as the viewing angle, θ_v, the half-opening angle, θ_c, the outer truncation angle, θ_w, the isotropic-equivalent energy, E_0, the circumburst density, n_0, the electron energy distribution index, p, and the fraction of energy imparted both to the electrons, ε_e, and to the magnetic field, ε_B, by the shock. These are the usual values from both observed and modeled afterglow populations as expressed in van Eerten et al. (2010), D'Avanzo et al. (2014), and van Eerten (2018). However specific, the parameters selected for the reference afterglow template are the median values of an observed short-GRB population taken as reference from D'Avanzo et al. (2014).
Table 5. OpSim Names and IDs, as Plotted in Figures 9 and 8.
A hybrid zone of two toad sister species, Rhinella atacamensis and R. arunco (Anura: Bufonidae), defined by a consistent altitudinal segregation in watersheds
Delimiting the spatial extension of a hybrid zone is essential to understand its historical origin and to identify the geographical and/or environmental factors which delimit it. Rhinella atacamensis and R. arunco are two sister species which together inhabit Chile between 25° and 38° S. Their distribution limits coincide at about 32° S, where recently it was reported that they hybridize in a small watershed (Pupío creek). Although the genetic evidence suggests that these two species form a hybrid swarm, they are not mixed homogeneously in the entire watershed, but rather are spatially segregated: R. arunco is found in the lower part of the creek and R. atacamensis in the higher part. An extensive exploration north and south of 32° S revealed other instances of hybridization, with the same pattern of spatial segregation within other watersheds. This study describes the hybrid zone combining mitochondrial sequences and nuclear AFLP markers. In the northern part, the hybrid zone is a narrow strip which crosses several watersheds and extends more than 130 km from NW-SE, so that R. atacamensis is found at higher altitudes towards the south. However, two points south of this strip show that the hybrid zone is more extensive and complex, and probably extends along the entire border of the mountain chains which form the watershed of the Aconcagua River (32°30'-33° S). We propose an explanation for the origin of this hybrid zone considering paleoclimatic and orographic information, and briefly discuss the taxonomic implications of these results.
INTRODUCTION
Hybrid zones (relatively narrow areas in which two genetically different populations, whether of the same or different species, meet and produce hybrids) are highly idiosyncratic phenomena, since they involve diverse evolutionary and ecological processes which interact at different spatial and temporal scales (Barton & Hewitt 1985, Arnold 1997, Howard et al. 2004). Examination of historical and current distributions of various taxa which co-exist over wide geographic areas has led to suggest that hybridization has been important in the diversification and current composition of those biotas (Swenson 2010, Nieto Feliner 2011). More restricted studies have demonstrated that this phenomenon is frequent in a great variety of taxonomic groups (e.g., Willis et al. 2006, Grant & Grant 2008, Lepais et al. 2009, Fontenot et al. 2011). Although these examples can be considered as exceptional since they are from well-known taxonomic groups or regional biotas, they show that natural hybridization is a common phenomenon, widely distributed geographically and taxonomically.
In order to identify the processes involved in the origin and persistence of hybrid zones it is essential to define their spatial extension, which can now be done with high precision using molecular markers. A detailed study of the geography of these zones may contribute to elucidate the historical factors or population events which originated them (e.g., Hofman et al. 2007, Hird et al. 2010, Edwards et al. 2011), and to identify the current physical and environmental conditions which determine their location and extension (e.g., Buckley et al. 2003, Yanchukov et al. 2006, Shields et al. 2010, Hapke et al. 2011).
This study illustrates how an extensive exploration and the combined use of mitochondrial and nuclear markers may reveal the complex geography of a hybrid zone, and how its location and configuration allow the generation of an explanation for its origin. The taxa involved are two sister species of toads, Rhinella atacamensis (Cei, 1962) and R. arunco (Molina, 1782) (Méndez 2000, Pramuk 2006), endemic to north-central Chile, whose joint distribution ranges extend from approximately 25° S to 38° S (Cei 1962, Veloso 2006, Correa et al. 2008). Both species are nocturnal and can be found on the borders of rivers, creeks, lagoons and other water bodies, including canals, dams and other man-made environments (Cei 1962). Until recently, it had been assumed that these two species had allopatric distributions, with their limits located around 32° S. However, populations of the Pupío watershed (a small creek located at 32° S) were recently described as a possible hybrid swarm of R. atacamensis x R. arunco (Correa et al. 2012). One of the main findings of that study was that a spatial segregation exists within the Pupío watershed: R. atacamensis occupies the higher part of the watershed, whereas R. arunco inhabits the lower part. At mid elevations, they hybridize.
The objective of this study was to define the spatial extension of the contact zone of R. atacamensis and R. arunco, for which we performed intensive fieldwork north and south of the Pupío watershed. Using a combination of mitochondrial (sequences of the control region) and nuclear (AFLP) markers we mapped the hybrid zone, which allowed us to evaluate whether the two species maintain the same spatial segregation in other watersheds and to identify the areas which need to be sampled to define the zone with greater precision. We also suggest a biogeographical explanation for the origin and present conformation of the hybrid zone, considering the paleoclimatic history and the geographical relief of the study area.
Material and study sites
Between the years 2007 and 2011 we collected a variable number of samples per locality from 43 localities which represent almost the entire known distribution ranges of R. atacamensis (25° to 32° S) and R. arunco (32° to 38° S), except for 120 km in the southern range of R. arunco (Fig. 1, Table 1). Sampling was more intense between 31° and 33° S, around the Pupío watershed (32° S) in which hybridization was originally described (Correa et al. 2012; Fig. 1). The individuals used were mainly tadpoles, postmetamorphics and juveniles. We also included a few adults, most of which were sampled by excising a small piece of interdigital membrane and released in the place they were collected. The specific identity of the specimens from north of 31° S and south of 33° S can be unambiguously established because only pure populations of each species have been described beyond these limits and both species can be distinguished without difficulty by their coloration patterns. Adults of R. atacamensis have sexual dimorphism in background color (whitish in females and yellowish in males) and small reddish spots on the dorsum or on the eyelids, while the background color of R. arunco varies from light grey to dark brown in both sexes (see more details in Correa et al. 2012). Postmetamorphics and juveniles of both species can also be distinguished by their color patterns. The permits for the capture and collection of the animals were provided by the Servicio Agrícola Ganadero (SAG) (resolutions 3085/2000, 2105/2004 and 13/2006). All collected individuals and tissues were deposited in the herpetological collection of the Departamento de Biología Celular y Genética of the Universidad de Chile (DBGUCH).
Obtaining DNA and mitochondrial sequences
We obtained sequences of the mitochondrial control region for 359 individuals from 43 localities (Table 1). The DNA was extracted principally from muscle tissue: from the thigh of adults, the tongue of juveniles and postmetamorphics, and from the tail of larvae. Occasionally we used liver, digit or interdigital membrane; the last only in those adult individuals which were returned to their habitat. DNA was extracted using a modification of the salt method of Jowett (1986). The mitochondrial fragment sequenced included the 3' extreme of the cytochrome b gene and approximately 850 bases of the contiguous extreme of the control region (noncoding). The primers used to amplify this fragment were CytbA-L (5'-GAATYGGRGGWCAACCAGTAGAAGACCC-3') and ControlP-H (5'-GTCCATAGATTCASTTCCGTCAG-3'), designed by Goebel et al. (1999). The PCR protocol is the same used in Correa et al. (2012).
Obtaining AFLP markers
We obtained AFLPs for a representative subset of 205 individuals (among the 359 used for obtaining sequences) from 27 localities (Table 1). Details for obtaining (digestion, ligation, pre-selective and selective PCR steps), genotyping and coding AFLPs are given in Correa et al. (2012). The only difference is that in the present study we used four combinations of selective primers, three of which were used in the earlier study. The combinations of selective primers were MseI-CAC/6FAM-EcoRI-ACT, MseI-CAA/VIC-EcoRI-ACC, MseI-CAT/NED-EcoRI-ACG and MseI-CAC/PET-EcoRI-ACA.
As a simple way of identifying hybrid individuals, a set of diagnostic markers (i.e., markers present in 100 % of the individuals of one species and absent in 100 % of the other) was defined using only individuals from the watersheds in which only haplotypes of one species were observed. The diagnostic markers of R. atacamensis were defined based on five localities north of the Choapa River watershed (83 specimens) and those of R. arunco from four localities south of the Aconcagua River watershed (16 specimens) (Fig. 1).

Fig. 1: Location of the 43 localities of Rhinella included in this study, numbered from north to south (see Table 1). Squares are localities from which both mitochondrial sequences and AFLP markers were obtained; only mitochondrial sequences were obtained from the localities with circles. The map at the extreme right is an amplification of the zone where sampling was more intense (31°30' - 33° S). Continuous thin lines in this map indicate the limits of the watersheds.

Table 1: Coordinates and altitudes of the 43 localities of Rhinella atacamensis, R. arunco and the hybrid zone included in this study, ordered and numbered from north to south (see map in Fig. 1). The number of individuals of each locality used to obtain AFLP markers and mitochondrial sequences is also indicated.
Phylogeographic and genetic analyses
Mitochondrial sequences were edited with BioEdit v7.0.7 (Hall 1999). We performed an initial alignment with ClustalX v2.0.12 (Larkin et al. 2007) of the first sequences obtained for the two species; the rest were then added manually. Haplotypes were generated with DnaSP v5.10.01 (Librado & Rozas 2009), including the sites with gaps. The phylogenetic relationships among the haplotypes of the two species were estimated by constructing a haplotype network with the median-joining method using the program Network v4.610 (Bandelt et al. 1999). Default parameters were used for obtaining the network, which was also used to visualize the mitochondrial genetic divergence at the intra- and interspecific levels (as mutational steps) and the relationships among the haplotypes present in the hybrid zone.
The presence of hybrids in Rhinella localities was investigated with NewHybrids v1.1beta3 (Anderson & Thompson 2002). This program uses a Bayesian framework to calculate the posterior probability that each individual of a sample belongs to one or more pre-defined categories of hybrids or to one of the parental species. We specified six categories: pure R. atacamensis; pure R. arunco; first generation (F1) hybrid; second generation (F2) hybrid; backcross between F1 and R. atacamensis; and backcross between F1 and R. arunco. We did not incorporate the species as additional information for this analysis. The length of the Markov chain Monte Carlo procedure was 1,000,000 iterations, with the first 100,000 discarded as burn-in. We performed various replicates in order to judge the consistency of the results. The location of the hybrid zone was defined considering the geographic distribution of the haplotypes of the control region and the individuals classified by NewHybrids.
Variation of the mitochondrial sequences
We obtained an alignment of 921 sites, of which 170 were variable between the two species (12 sites with gaps). We found a total of 121 haplotypes for the two species (including those defined by the gaps): 58 corresponding to R. atacamensis and 63 to R. arunco (Fig. 2). Although the number of individuals with haplotypes of R. atacamensis was greater (185), the observed level of intraspecific divergence (estimated as mutational steps) was greater in R. arunco (22 steps vs. 19 in R. atacamensis). The intraspecific divergence was much less than that observed between the species (75 mutational steps).
In four localities, Puente Pupío, Pupío Medio, El Sobrante and Las Chilcas (Fig. 1), we detected mixtures of haplotypes of both species. Figure 2 illustrates the relationships among the haplotypes of these localities. The haplotypes are not directly related, but are dispersed within the networks of each of the species. This pattern was more accentuated in the haplotypes of R. atacamensis. The sequences were deposited in GenBank with accession numbers AY818062, AY818063, HQ132482-HQ132670 and KC778198-KC778365.
We found individuals with a mixture of the diagnostic markers of the two species, including the four localities in which there were mixed haplotypes. In seven other localities between those latitudinal limits we found only individuals with markers of one or the other species (see below and Fig. 3).
The NewHybrids analysis agreed completely with the distribution of the diagnostic markers. As expected from their phenotype (color patterns) and control region haplotypes, all individuals from localities north of 31°30' S and south of 33° S were correctly identified as belonging to their respective species. Also, according to their color patterns, the individuals of the localities of the upper zone of the Choapa River (Canela Alta, Los Perales, Palquial) and the two from Quebrada Seca were classified as R. atacamensis (Fig. 3), whereas the individuals from the lower sectors of the watersheds south of the Pupío watershed (Quilimarí, Los Molles, El Trapiche, Illalolén) were classified as R. arunco. Only in three localities were individuals found which could be considered first or second generation hybrids, while in another five only some individuals classified as backcrosses were present (Fig. 3).
Delimitation of the hybrid zone
The limits of the hybrid zone were drawn to include all localities in which mitochondrial haplotypes of both species were found and/or individuals were classified in one of the last four hybrid categories of NewHybrids mentioned above. In its northern part, the zone appears to have a regular form, crossing several watersheds in a NW-SE orientation (Fig. 3). However, in the southern part, two points located in the Aconcagua River watershed, Quebrada Seca and Las Chilcas, suggest that the geography of the zone is more complex. For this reason, this part of the hybrid zone was drawn with discontinuous lines, one which surrounds Quebrada Seca, which would be an isolated locality of R. atacamensis, and the other extending in Los Andes foothills on the border of the Aconcagua River watershed and including Las Chilcas (Fig. 3). We also added question marks in Fig. 3 to indicate those areas which should be sampled to define the limits of the hybrid zone with greater precision.
The zone described in this study extends from the locality of Huentelauquén to Las Chilcas, a latitudinal distance of about 150 km (Fig. 3). Examining the distributions of the two species, R. atacamensis was found at increasingly higher altitudes towards the south, although the locality that until now defined the southern limit of its distribution (Quebrada Seca) appears to be isolated from the rest of the populations of Los Andes range. Inversely, R. arunco was found only in the lower part of the watersheds to the north of the Aconcagua River watershed, reaching the coastal strip at the mouth of the Choapa River (Fig. 3). The locality of Las Chilcas is an exception to this pattern, since there R. atacamensis was found at less than 1000 m in the Aconcagua River watershed; in the other two localities of this watershed, Resguardo Los Patos and Quebrada Seca, it was found at 1211 m and 1693 m, respectively.

DISCUSSION

This study revealed two important biogeographic aspects of the hybrid zone of R. atacamensis and R. arunco: first, the zone has a relatively reduced extension compared to the distribution of both species, and second, there is a consistent altitudinal segregation between them. The estimated extension of the zone (150 km in latitude) is wide in comparison with the distribution of each species, but is reduced in relation to the combined distribution (about 1450 km in length). Additionally, the two species are not homogeneously distributed in the watersheds; thus the hybrid zone occupies a reduced fraction of each and may be represented as a narrow strip which crosses them. The second aspect, the altitudinal segregation, was observed by Correa et al. (2012) in one of the watersheds of the zone (Pupío creek). However, the present study shows that it is a consistent pattern in all the area of contact of the two species. In animals, this kind of hybrid zone, where the parental species are consistently separated along elevational gradients, has been scarcely reported in the literature (Culumber et al. 2010).
A factor that limits the spatial extension of hybrid zones is the extent to which the species are mixed and are permeable to gene introgression. If strong reproductive barriers (prezygotic and/or postzygotic) exist, hybrids are scarce and the hybrid zone is relatively narrow or almost nonexistent, when the surrounding areas where introgression has been detected are not considered (e.g., Colliard et al. 2010, Taylor et al. 2012, Miraldo et al. 2013). Correa et al. (2012) provided genetic and reproductive evidence showing that the populations of R. atacamensis and R. arunco of the Pupío creek (32° S) conform a possible hybrid swarm, suggesting the absence of reproductive barriers. This study reinforces and spatially extends that initial observation, showing also no evidence of mitochondrial and nuclear introgression outside of the defined hybrid zone. This apparent absence of reproductive barriers between R. atacamensis and R. arunco (which allows us to dismiss a priori a process of parapatric speciation), the large genetic divergence compared to the intraspecific variation (which is evident in the control region network) and the reduced extension of the hybrid zone detected so far suggest that the hybrid zone originated by secondary contact, which implies that one or both species extended its distribution. The paleoclimatic and orographic information given below allows us to hypothesize that one species expanded its range (R. arunco), displacing the other, which would explain the geographic conformation of this hybrid zone.
In Chile, the changes in the distribution of the flora due to the climatic fluctuations produced by the Pleistocene glaciations and during the Holocene are well known (Villagrán & Hinojosa 1997, Villagrán et al. 1998, Villagrán 2001). Due to the geographic conformation of the country, vegetation displacements have mostly been north and south. However, these changes were modified by the geographical relief, thus these displacements have reached different latitudes depending upon their altitude. Thus the Mediterranean vegetation (mainly sclerophyllous shrubland) reached a lower latitude in the last glacial maximum, mainly in the coastal zone (Villagrán 1995, 2001, Villagrán & Armesto 2005). Considering that R. arunco is an endemic species of the Mediterranean ecoregion (central Chile), the northward expansions of the Mediterranean vegetation during the glacial cycles of the Pleistocene may explain the presence of this species mainly on the coast between 31°30' and 32°30' S (Fig. 3). If in this area there were glacial refugia for this or other species, they have not been described. Moreover, although the coastal plain is currently narrow, it was wider during the last glacial maximum and could have served as a corridor which allowed the northward expansion across the successive watersheds. These same mechanisms have been suggested to explain the colonization of south-central Chile by a sigmodontine mouse and a continental fish (Unmack et al. 2009, Palma et al. 2012). A range expansion of R. arunco through areas of low altitude and slight slopes (i.e., the valleys) would also explain the presence of R. atacamensis at increasingly higher altitudes towards the south, including a high locality, Quebrada Seca, apparently isolated south of 32°30' S (Fig. 3). A possible explanation for the formation of this distributional pattern is a competitive displacement of the lowland populations of R. atacamensis by R. arunco, although whether this or other ecological processes are involved needs to be evaluated.
There are reports of hybrid zones closely associated with environmental transition areas or ecotones, where hybridization is frequent but limited by extrinsic and/or intrinsic factors (e.g., Yanchukov et al. 2006, Hapke et al. 2011, Chavez et al. 2011, Culumber et al. 2012). The altitudinal segregation which defines the hybrid zone of R. atacamensis and R. arunco suggests different environmental preferences, but this does not agree with the fact that both species occupy both lower and higher zones of the watersheds in the areas of their distributions where only one species occurs. Moreover, there are no abrupt changes in environmental parameters along the courses of the rivers in which the hybridization occurs, or at least they are not apparent. In a wider climatic context, the zone is located in the xeric-oceanic Mediterranean bioclimate, which borders on the north with the oceanic-desert Mediterranean bioclimate and on the south with the seasonal rain-oceanic one. The transition between these bioclimates is gradual and is closely associated with a north-south precipitation gradient (Luebert & Pliscoff 2006). Also this zone is within the area in which the transverse valleys (which are more or less perpendicular to Los Andes range) disappear and the Coastal range begins to appear; these are low, older mountains found from 33° to 41° S and are parallel to Los Andes range. In both cases, the transitions are gradual and do not coincide with the location of the hybrid zone, so we discount that the climatic conditions and/or geographic relief might be more relevant than a possible historic event (the expansion of R. arunco) to explain its origin and current conformation.
Finally, we would like to consider briefly the consequences of the existence of a hybrid zone of R. atacamensis and R. arunco for the taxonomy and conservation of Rhinella in north-central Chile. The great majority of populations between 25° and 38° S may be assigned unequivocally to one or the other species by phenotype and by mitochondrial and nuclear markers (diagnostic AFLP markers). However, populations of the hybrid zone may be considered as a fusion of both species (Fig. 3). The geographic location of the zone implies the presence of pure populations of both species in the main watersheds between 31°30' and 33° S, but they are separated by mixed populations whose exact extension is difficult to determine without estimations of gene introgression in these systems. Moreover, Correa et al. (2012) and the present study demonstrate that the majority of individuals from hybrid populations have different proportions of the nuclear markers of both species, thus they cannot be assigned to one or another. Therefore, we suggest expanding the taxonomic definition of each species, including all the respective pure populations and those composed of a mixture of both. Thus, the distribution range of R. atacamensis would now be defined between 25° and 32°52' S (Las Chilcas), whereas that of R. arunco would be defined between 31°35' (Huentelauquén) and 38° S. In a conservation context, Allendorf et al. (2001) suggest that this type of natural hybrid zone, where apparently the reproductive success of hybrids is similar to that of the parental species (Correa et al. 2012), constitutes an eligible conservation unit, although in this case their exact geographic limits remain to be defined. These proposals would allow us to formalize the discovery of this hybrid zone, which also adds an interesting evolutionary dimension to the study of the biogeography of the amphibians of Chile.
Fig. 2: Haplotype network of the mitochondrial control region of Rhinella atacamensis and R. arunco, including all observed haplotypes (121). The amplifications of the networks for each species are shown on the right; the haplotypes of both species found mixed in four localities of the hybrid zone are indicated in gray: Pupío Medio (PM), Puente Pupío (PP), El Sobrante (ES) and Las Chilcas (LCh). The size of the circles is proportional to the sampling frequency of each haplotype and the length of connecting lines is proportional to the mutational steps that separate them.
Fig. 3: Result from NewHybrids (left) and map of the hybrid zone of Rhinella atacamensis and R. arunco. Each colored bar represents one individual and the extension of the color indicates the probability of belonging to one of the six categories specified below the bars according to the AFLP analysis performed in NewHybrids. Numbers at the right of the bars indicate the localities of the individuals (locality numbers are the same as in Figure 1 and Table 1). The probable location and extension of the hybrid zone is indicated with a hatched area and discontinuous lines on the map, which also has question marks in zones which need to be explored. The localities indicated with squares are those included in the NewHybrids analysis; circles represent the localities with only haplotype data. As in Fig. 1, continuous thin lines in the map indicate the limits of the watersheds.
TABLE 1. Continuation.
|
v3-fos-license
|
2018-07-19T14:00:03.812Z
|
2018-03-25T00:00:00.000
|
51790786
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.pub.iapchem.org/ojs/index.php/admet/article/download/470/pdf",
"pdf_hash": "24d5df408588a6b92d755b7c2efe8583b929891a",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:719",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "24d5df408588a6b92d755b7c2efe8583b929891a",
"year": 2018
}
|
pes2o/s2orc
|
In silico ADME in drug design – enhancing the impact
Each year the pharmaceutical industry makes thousands of compounds, many of which do not meet the desired efficacy or pharmacokinetic properties, describing the absorption, distribution, metabolism and excretion (ADME) behavior. Parameters such as lipophilicity, solubility and metabolic stability can be measured in high throughput in vitro assays. However, a compound needs to be synthesized in order to be tested. In silico models for these endpoints exist, although with varying quality. Such models can be used before synthesis and, together with a potency estimation, influence the decision to make a compound. In practice, it appears that often only one or two predicted properties are considered prior to synthesis, usually including a prediction of lipophilicity. While it is important to use all information when deciding which compound to make, it is somewhat challenging to combine multiple predictions unambiguously. This work investigates the possibility of combining in silico ADME predictions to define the minimum required potency for a specified human dose with sufficient confidence. Using a set of drug discovery compounds, in silico predictions were utilized to compare the relative ranking based on minimum potency calculation with the outcomes from the selection of lead compounds. The approach was also tested on a set of marketed drugs and the influence of the input parameters investigated.
Introduction
Drug design is a multi-parameter optimization process, with pharmacokinetic properties describing absorption, distribution, metabolism and excretion (ADME) of a compound being important to consider early on. These parameters describe the pharmacokinetics of a compound and are key to determining the dose required for efficacy. Parameters such as lipophilicity, solubility, and metabolic stability can be measured in a high throughput manner in vitro and are, thus, often used as early ADME screens. During lead identification and lead optimization phases a substantial number of compounds are synthesized and characterized in such screening assays for selection towards additional, more costly in vitro and in vivo experiments, to be able to finally select a single candidate drug. However, a compound needs to be physically available to be subjected to such assays. Thus, the drug industry generates thousands of compounds per month, many of which do not show desirable ADME properties.
In silico models for such ADME endpoints, on the other hand, have been available for a long time, though with varying quality and usability [1][2][3][4][5][6]. Ideally, such models would be used before synthesis and, together with any potency estimation for the specific case, influence the decision to make a compound. Consistent usage of such in silico predictions has the potential to considerably reduce the number of synthesized compounds with inadequate ADME properties. In practice, it seems that most often only a few predicted properties are really considered before synthesis, usually including lipophilicity as an important parameter. While it is understood that all available knowledge should be used to select which compound to make, it is not easy to define how the outcomes of various predictions can be combined unambiguously. Lately, the use of multi-parameter optimization and scoring tools has been proposed and shown to be of value [7,8]. However, the definition of the scoring functions can be difficult and may be somewhat arbitrary. As a physiologically meaningful scoring function, the predicted dose to man (D2M) has been suggested for use as early as possible. Several reports validated early D2M predictions both from preclinical in vivo data [9] as well as from in vitro data combined with in silico predictions [10]. However, using in silico predictions only was not found to be reliable enough, since both potency and pharmacokinetic properties would need to be accurately predicted.
Here we concentrate on the pharmacokinetic properties only and suggest using in silico ADME models to predict what potency would be required for a specific compound to enable coverage over the whole dosing interval. By inverting the D2M equation and setting the target dose to, for example, 100 mg once daily, the achievable plasma concentration can be calculated from in silico predicted parameters and the minimum required potency ("threshold pIC50") defined as a ranking score for virtual compounds. We show how such a ranking score could be used in the discovery setting on the example of a set of 27 compounds leading towards a candidate structure [11]. We also test the concept on a set of known drugs with information about dosing and human pharmacokinetics [9] and compare the results from purely in silico predictions to those from in vitro data and from human in vivo data. We also investigate how the in silico derived parameters influence the outcome of the equations, highlighting concepts to help drug discovery to improve pharmacokinetic properties during design.
Inverse dose-to-man (D2M) prediction
Equations used are standard pharmacokinetic equations [9,12,13] based on a one-compartment model considering immediate absorption to estimate the average (C_ave, Eq. 1), minimum (C_min, Eq. 2) and maximum (C_max, Eq. 3) concentration of a drug at steady state:

C_ave = F · D_po / (Cl · τ)   (1)

C_min = (F · D_po / V) · e^(−k_e·τ) / (1 − e^(−k_e·τ))   (2)

C_max = (F · D_po / V) / (1 − e^(−k_e·τ))   (3)

where F is bioavailability (Eq. 4), D_po is the daily oral dose, Cl is the plasma clearance estimated using the well-stirred model (Eq. 5), τ is the dosing interval, typically set to 24 h, V is the volume of distribution (predicted) and k_e is the elimination constant (Eq. 6).
F = F_abs · F_g · F_h   (4)

where F_abs (fraction absorbed) is estimated from permeability (Eq. 7), F_g (fraction escaping gut metabolism) is set to 1 and F_h (fraction escaping hepatic metabolism) is estimated from Cl (Eq. 8).

Cl_b = Q_h · fu_b · CLint_in vivo / (Q_h + fu_b · CLint_in vivo)   (5)

with Cl_b being blood clearance, Q_h the liver blood flow (20 ml/min/kg), CLint_in vivo the in vivo intrinsic clearance estimated from in vitro CLint (Eq. 9), and fu_b the fraction unbound in blood.
Plasma clearance is obtained as Cl = Cl_b · R_b, where R_b is the blood:plasma ratio, set to 1 for neutrals and bases and to 0.55 for acids and zwitterions.
fu_b = fu / R_b, where fu is the fraction unbound in plasma (predicted).
CLint_in vivo = CF · CLint_sc   (9)

where CF is an empirical correction factor adopted for the simplified regression offset approach (CF = 3, see below), and CLint_sc is the scaled intrinsic clearance from the human hepatocyte incubation (Eq. 9a).
CLint_sc = CLint_HH · SF_HH / fu_inc   (9a)

where CLint_HH is the intrinsic clearance in human hepatocytes (predicted), SF_HH are the scaling factors (120 million cells per g liver × 1680 g liver weight / 70 kg body weight = 2.88) and fu_inc is the fraction unbound in the hepatocyte incubation (predicted).
The minimum potency value, e.g., the "threshold pIC50" based on C_min, is then calculated as the negative logarithm of the predicted concentration divided by the specified fold coverage, typically set to 1 or 3:

threshold pIC50 = −log10(C_min / fold)   (10)
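As an illustration, the chain of Eqs. 4-10 can be assembled into a single function. The unit conventions here (mL/min/kg clearances, L/kg volume, a 70 kg subject, a molecular weight needed to convert mg/L to molar) and the use of total rather than unbound C_min are assumptions of this sketch, not choices specified in the text; F_abs is taken as an input since the form of Eq. 7 is not reproduced here.

```python
import math

def threshold_pic50(clint_hh, fu_inc, fu, v, f_abs, mw,
                    dose_mg=100.0, tau_h=24.0, r_b=1.0, fold=3.0,
                    q_h=20.0, cf=3.0, sf_hh=2.88, body_wt=70.0):
    """Minimum required potency for a given once-daily oral dose,
    assembled from in silico predicted inputs (Eqs. 1-10 in the text)."""
    clint_sc = clint_hh * sf_hh / fu_inc                 # Eq. 9a
    clint_vivo = cf * clint_sc                           # Eq. 9
    fu_b = fu / r_b                                      # fraction unbound in blood
    cl_b = q_h * fu_b * clint_vivo / (q_h + fu_b * clint_vivo)  # Eq. 5, well-stirred
    cl = cl_b * r_b                                      # plasma clearance, mL/min/kg
    f_h = 1.0 - cl_b / q_h                               # Eq. 8
    f = f_abs * 1.0 * f_h                                # Eq. 4, with F_g = 1
    k_e = (cl * 60.0 / 1000.0) / v                       # Eq. 6, in 1/h
    dose_per_kg = dose_mg / body_wt
    c_max = (f * dose_per_kg / v) / (1.0 - math.exp(-k_e * tau_h))  # Eq. 3, mg/L
    c_min = c_max * math.exp(-k_e * tau_h)               # Eq. 2
    c_min_molar = c_min / mw / 1000.0                    # mg/L -> mol/L
    return -math.log10(c_min_molar / fold)               # Eq. 10

# e.g. a moderately stable, moderately bound compound:
print(threshold_pic50(clint_hh=5.0, fu_inc=1.0, fu=0.1,
                      v=1.0, f_abs=0.9, mw=400.0))
```

Raising the predicted clearance or lowering F lowers C_min and therefore raises the threshold pIC50, which is the intended ranking behavior: harder-to-dose compounds demand more potency.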
Simplified regression offset approach - correction factor (CF)

The regression offset approach [15] was shown to improve in vitro to in vivo scaling of clearance, especially when experiments were compared across different laboratories (sites) and across different batches of hepatocytes or microsomes. Extensive in-house data analysis (data not shown) indicated that overall the regression slope can be set to 1 and the offset to about 0.5, giving an equivalent level of correct predictions in three species: human, rat and dog. Thus, in vivo CLint can be estimated by multiplying the scaled CLint with a correction factor of 3. Here we adopt this procedure to estimate in vivo CLint directly from in silico predictions.
In silico models
Five in silico models were used as input for the above calculations: human volume of distribution at steady state, intrinsic Caco2 permeability, human plasma protein binding, human hepatocyte intrinsic clearance, and fraction unbound in the hepatocyte incubation. All but the volume of distribution model use AstraZeneca in-house experimental data. All models are available within AstraZeneca.
Human volume of distribution (V)
Human volume of distribution data for about 700 compounds were collated from the literature [16][17][18] and randomly split into a training and a test set (n = 544 and 144, respectively). Acids, bases, neutrals and zwitterions are included in both sets. The model is a random forest [19] using physicochemical descriptors [20] including ACD log P and log D [21] and clog P [22]. The experimental data is modelled in its logarithmic form. The model explains ~70 % of the variance of the test set and has an error in prediction of 0.4 log units.
Intrinsic Caco2 permeability (P app )
The model is built on in-house data of intrinsic Caco2 permeability, i.e., apparent permeability measured in Caco2 cells in the presence of a defined transport-inhibitor cocktail as described earlier [14], and is updated about every six months. The present training set consists of >4,000 data points. About 100 simple physicochemical descriptors [20] including ACD log P and log D [21] and clog P [22] are calculated from the compound structures. The data is modelled as log P app using random forest regression [19] as implemented in scikit-learn [23]. The latest temporal test set comprises about 300 compounds and shows an R 2 of 0.4 and a root mean squared error of prediction (RMSEP) of 0.6 (log scale), with about 60 % of the compounds being within 3-fold of the experimental value. The model can be used to distinguish between high and low permeability compounds with a classification accuracy above 0.8.
Human plasma protein binding as fraction unbound (fu)
The model is built on in-house data of human plasma protein binding generated using equilibrium dialysis in high throughput assays as described earlier [24][25][26]. It is updated monthly. The present training set consists of almost 90,000 compounds. The data is modelled as log K (= log[(fraction unbound)/(fraction bound)]). The modelling procedure utilizes support vector machines [27] with a linear kernel, signature descriptors [28] and the conformal prediction framework [29] as implemented in cpsign from GenettaSoft [30]. The latest temporal test set comprising 750 compounds shows an R 2 of 0.7 and an RMSEP of 0.4, with about 80 % of the compounds being within 3-fold of the experimental value of fu.
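Since the binding models predict log K rather than the fraction unbound directly, the prediction has to be transformed back before use in the equations above. A minimal sketch of the forward and back transformations (function names are ours, for illustration):

```python
import math

def logk_to_fu(log_k: float) -> float:
    """Invert the modelled log K = log10(fu / (1 - fu)) back to fraction unbound."""
    k = 10.0 ** log_k
    return k / (1.0 + k)

def fu_to_logk(fu: float) -> float:
    """Forward transform used when training the binding models."""
    return math.log10(fu / (1.0 - fu))
```

For example, a predicted log K of -1 corresponds to a fraction unbound of about 0.09.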
Human hepatocyte intrinsic clearance (CLint HH )
The model is built on in-house data of human hepatocyte intrinsic clearance generated in high throughput assays using incubations of either cryopreserved or fresh human hepatocytes at 37 °C for up to 120 min as described earlier [26,31,32]. It is updated about every six months. The present training set contains more than 11,000 compounds and the data is modelled as log CLint. The modelling procedure utilizes support vector machines [27] with a linear kernel, signature descriptors [28] and the conformal prediction framework [29] as implemented in cpsign from GenettaSoft [30]. The latest temporal test set comprising almost 200 compounds shows an R 2 of 0.2 and an RMSEP of 0.4, with ~75 % of the compounds being within 3-fold of the experimental CLint value.
Fraction unbound in the hepatocyte incubation (fu inc )
The model is built on in-house binding data measured in cryopreserved rat hepatocyte incubations as described earlier [33]. The model is updated about every six months and the present data set contains about 1,700 compounds. The data is modelled as log K (= log[(fraction unbound)/(fraction bound)]). The modelling procedure utilizes support vector machines [27] with a radial basis function kernel, signature descriptors [28] and the conformal prediction framework [29] as implemented in cpsign from GenettaSoft [30]. The latest temporal test set comprising about 200 compounds shows an R 2 of 0.5 and an RMSEP of 0.5, with ~75 % of the compounds being within 3-fold of the experimental fu inc value.
Global Sensitivity Analysis
Global sensitivity analysis was performed by a quasi-Monte Carlo method using the Fourier amplitude sensitivity test (FAST) and Sobol' sensitivity, which are implemented in the Global sensitivity analysis toolbox (GSAT) [34] in MATLAB [35] using 20,000 sample points.
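For illustration, first-order Sobol' indices can be estimated in pure Python with a plain Monte Carlo Saltelli-type estimator; this is a simplified stand-in for the quasi-Monte Carlo FAST/Sobol' implementation in GSAT used here. The model below is a reduced version of the C min chain (Eqs. 2, 5 and 9 with R b = F abs = F g = 1), and the parameter ranges in the usage note follow Figure 5; all of this is a sketch, not the paper's actual code.

```python
import math
import random

def log_cmin(clint, fu_inc, fu, v):
    """log10 of the steady-state trough for a 1.43 mg/kg once-daily dose,
    chaining Eqs. 2, 5 and 9 (R_b = F_abs = F_g = 1; illustrative only)."""
    qh, cf, sf, tau, dose = 20.0, 3.0, 2.88, 24.0, 1.43
    clint_vivo = cf * clint * sf / fu_inc            # ml/min/kg
    cl = qh * fu * clint_vivo / (qh + fu * clint_vivo)
    f = 1.0 - cl / qh                                # hepatic availability
    ke = cl * 0.06 / v                               # 1/h
    c_max = f * dose / v / (1.0 - math.exp(-ke * tau))
    return math.log10(c_max * math.exp(-ke * tau))

def sobol_first_order(model, bounds, n=4096, seed=0):
    """Plain Monte Carlo Saltelli estimator of first-order Sobol' indices."""
    rng = random.Random(seed)
    d = len(bounds)
    draw = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    A = [draw() for _ in range(n)]
    B = [draw() for _ in range(n)]
    fA = [model(*x) for x in A]
    fB = [model(*x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(d):
        # A with column i taken from B ("pick-freeze" matrix AB_i)
        fABi = [model(*(a[:i] + [b[i]] + a[i + 1:])) for a, b in zip(A, B)]
        num = sum(fb * (fab - fa) for fa, fb, fab in zip(fA, fB, fABi)) / n
        indices.append(num / var)
    return indices
```

With bounds of (1, 200) for CLint, (0.01, 0.7) for fu inc , (0.001, 0.3) for fu and (0.2, 3.0) for V, the index for V dominates, in line with Figure 6.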
Local Sensitivity Analysis
Local sensitivity analysis was performed by taking Jacobian matrices of the model with respect to the model parameters at given parameter points [36]. These matrices contain sensitivity values which can be examined, or plotted against other parameters, to show which parameters have the greatest effect at each point.
J(θ) = ∂x/∂θ = (∂x/∂θ 1 , … , ∂x/∂θ m )

where θ = (θ 1 , … , θ m ) is a vector of the parameters to be examined, p = (p 1 , … , p n ) is a vector of the remaining parameters over which the analysis is scanned, and x is a scalar output from the model. This was performed in MATLAB [35] using automatic differentiation of the parameters via the myAD toolkit [37].
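A central finite-difference approximation can stand in for the automatic differentiation described above. The sketch below estimates the gradient of a scalar model output at a parameter point (one row of the Jacobian) and is verified against a simple analytic test function; it is our illustration, not the myAD toolkit.

```python
def local_sensitivity(f, theta, rel_step=1e-6):
    """Central finite-difference gradient of a scalar model f at the parameter
    point theta; a numerical stand-in for automatic differentiation."""
    grad = []
    for i, t in enumerate(theta):
        h = rel_step * max(abs(t), 1.0)      # step scaled to parameter size
        up = list(theta); up[i] = t + h
        dn = list(theta); dn[i] = t - h
        grad.append((f(up) - f(dn)) / (2.0 * h))
    return grad
```

For x(θ) = θ1² + 3·θ2 the gradient at (2, 5) is (4, 3), which the finite-difference estimate reproduces to high accuracy.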
Data sets
Two data sets were used for the present analysis. Data set 1 was a set of 27 drug discovery compounds exemplifying important compounds leading towards candidate selection [11]. Data set 2 was a set of 21 marketed drugs, for which potency data, dosing information and human pharmacokinetic data had been collated and used to show feasibility of early D2M predictions [9].
Calculation of required minimum potency (threshold pIC50) for a set of drug discovery compounds
As proof of concept, a set of drug discovery molecules leading towards a candidate structure [11] was evaluated. Intrinsic clearance in human hepatocytes, fraction unbound in the incubation, fraction unbound in plasma, volume of distribution and intrinsic Caco2 permeability were predicted for all compounds using the present in-house models (see Table 1), and minimum and maximum plasma concentrations for a once-daily oral dose of 100 mg were calculated using equations 2 and 3. The project aimed at minimum plasma concentrations with 3-fold coverage over potency measured as pIC50 in a whole blood assay. Thus, threshold pIC50 values were calculated from C min,total /3 (see Table 2). Blood-to-plasma ratios for these mainly neutral compounds were considered to be 1. Protein binding was considered in the scaling approach, but threshold pIC50s were based on the resulting total plasma concentration to match the potency measurement.
The data indicated that some of the compounds had a very short half-life; thus a once-daily dose resulted in a high threshold pIC50 to cover potency over 24 hours and a high C max /C min ratio. Using a 10 times higher dose would reduce the threshold pIC50 by one unit but would show the same C max /C min ratio, since the applied model assumes linear kinetics. Considering only compounds with a threshold pIC50 below 9 and a C max /C min ratio below 100, about 12 compounds remained. This set of compounds included compound 15b, the compound finally selected, and three of the remaining four compounds of higher interest for which rat in vivo results were reported. Compound 22, which according to the present analysis was least favourable, was actually found in the rat study to have a short half-life and, thus, not suitable to progress.
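The dose-linearity argument can be checked numerically: scaling the dose scales C min and C max by the same factor, so the C max /C min ratio is unchanged while the threshold shifts by exactly one unit for a 10-fold dose change. The parameter values below (F = 0.5, Cl = 0.2 L/h/kg, V = 1 L/kg, MW = 400 g/mol) are illustrative assumptions.

```python
import math

def trough_peak(dose_mg_kg, f, cl_l_h_kg, v_l_kg, tau_h=24.0):
    """Steady-state trough and peak (mg/L) from Eqs. 2-3 (one compartment,
    immediate absorption)."""
    ke = cl_l_h_kg / v_l_kg
    c_max = f * dose_mg_kg / v_l_kg / (1.0 - math.exp(-ke * tau_h))
    return c_max * math.exp(-ke * tau_h), c_max

def threshold_px(c_min_mg_l, mw_g_mol=400.0, fold=3.0):
    """Eq. 10 on a molar scale for a hypothetical MW of 400 g/mol."""
    return -math.log10(c_min_mg_l / 1000.0 / mw_g_mol / fold)

lo1, hi1 = trough_peak(1.43, 0.5, 0.2, 1.0)     # ~100 mg / 70 kg once daily
lo10, hi10 = trough_peak(14.3, 0.5, 0.2, 1.0)   # 10-fold higher dose
```

Here hi1/lo1 equals hi10/lo10, and threshold_px(lo1) exceeds threshold_px(lo10) by exactly 1.0 log units.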
In summary, it seems that for this data set the threshold pIC50 together with the predicted C max /C min ratio could be used for ranking the compounds and correctly identifying those with inferior pharmacokinetics. The results also highlight the danger in deprioritizing compounds simply by applying a cut-off for, for example, metabolic stability. Here, three compounds had (predicted) CLint values higher than 20 µl/min/10 6 cells: compounds 12, 14a and 19. For 12, a reference compound, the threshold pIC50 was one of the lowest in the whole set, based on a predicted blood clearance below 5 ml/min/kg, whereas 14a showed an intermediate threshold pIC50 (8.2) and a blood clearance just above 5 ml/min/kg. Only for compound 19 was the threshold pIC50 estimated to be at the high end within the present set, even though the blood clearance was again predicted as just above 5 ml/min/kg. Compound 22, the compound with the highest threshold pIC50, has almost double that blood clearance despite a lower hepatocyte CLint (~8 µl/min/10 6 cells). Rank order differences between in vitro (or in silico) CLint and predicted in vivo clearance can be explained by the different binding properties: high binding in the incubation, i.e., low fu inc values, will potentiate the metabolic instability measured in the incubation, whereas high binding in plasma, i.e., low fu, can to some extent mitigate low metabolic stability seen in vitro. Furthermore, minimum concentrations mainly depend on a compound's half-life, determined by clearance and volume of distribution. Thus, volume of distribution is another important factor that may change the rank order.
Calculation of threshold potency for a set of known drugs
As a second test, we investigated the approach for a set of marketed drugs for which information on dosing and human pharmacokinetic data had been collated by McGinnity et al. [9]. This dataset was previously used to evaluate the validity of early dose-to-man predictions [9,10]. Table 3 shows in silico ADME parameters for the 21 drugs predicted with the present models at AstraZeneca. To take into account that these drugs are not necessarily given once daily, we adopted the dosing regimen suggestions by McGinnity et al. [9] when calculating the minimum required potency, threshold pX (see Table 4). The resulting threshold pX values are below 10 for all but four of the compounds, indicating that the approach is able to estimate pharmacokinetic behaviour to some extent. Additionally, the C max /C min ratio was estimated as below 100 for most of the compounds, even though it showed values above 1000 for three of them (ritonavir, bisoprolol and diclofenac). The availability of both in vitro [10] and in vivo [9] data for the compounds prompted us to investigate to what extent the usage of experimental data would change the results (see Table 5 and Figure 1).

Table 4 footnotes: a dosing interval adopted from ref. [9], based on the predicted t 1/2 (t 1/2 > 8 h: 24 h; t 1/2 4-8 h: 12 h; t 1/2 2-4 h: 8 h; t 1/2 1-2 h: 6 h; t 1/2 < 1 h: 4 h); b C min,total : predicted minimum total concentration at steady state; c C min,free : predicted minimum concentration at steady state corrected for plasma protein binding; d C max,total : predicted maximum total concentration at steady state; e threshold pX from C min,free : required minimum pX estimated from free C min for coverage over the dosing interval; f C max /C min : predicted ratio between maximum and minimum total concentration at steady state.

Table 5. Threshold pX based on a daily dose of 100 mg (1.4 mg/kg), given as 1-6 doses per day, for 21 known drugs [9], derived from in silico data (as above), in vitro data [9,10,14], and actual human in vivo data [9].

Table 5 footnotes: a threshold pX from in silico, as in Table 4; b threshold pX calculated from in vitro values using the same procedure as in Table 4 (CLint HH and fu values from ref. [10], P app values from [9] and [14] as specified, and in-house values for fu inc ); c threshold pX calculated from in vivo values [9] using the same procedure as in Table 4; d P app value from ref. [9]; e P app value from ref. [14]; f fu inc value from in-house in silico model; g no value (not enough experimental data available for calculation); h in-house value for CLint HH ; i fu value from ref. [17]; j P app from in-house in silico model; k in-house value for P app .
Most threshold pX values can be found within the range of 7-9, with a few values, especially in vivo derived, below 7 and others, most often in silico derived, above 9. Only one of the compounds for which human in vivo PK data was available showed an in vivo derived threshold pX value above 10, whereas all in vitro derived threshold pX values were below 10. The one compound with a higher in vivo data derived value, diclofenac, was earlier recognized as not being properly described by the one-compartment PK model employed here [9]. For about a third of the compounds all three values are very close, within about one log unit, e.g., carvedilol or nitrendipine. Other compounds have a somewhat higher spread, most often with the in vitro value closer to the in vivo derived value, e.g., acebutolol or betaxolol. For a few compounds the in silico derived value clearly differs from the other two, e.g., bisoprolol, diazepam and ritonavir. To better understand why the in silico predictions differ, we investigated how well the in silico and in vitro data can predict human in vivo properties (see Figures 2-4). Note that cyclosporine and desloratadine were excluded from this analysis, since there was not enough in vitro data for the former and no in vivo data for the latter compound. Figure 2 shows that in silico data tends to overpredict human clearance. The two compounds on the left, diazepam and ritonavir, are obvious extremes, and their higher threshold potency estimates are most likely related to this clearance misprediction. Note that clearance predictions from in vitro are clearly closer to the line of unity. However, we find two compounds, diclofenac and metoprolol, to be underpredicted by more than 3-fold using the present scaling approach. This can also be seen in a more optimistic threshold pX value when compared to in vivo (see Figure 1).
Overall, about two thirds of the compounds have in silico predicted clearance values within 3-fold of the experimental in vivo values, and all but two compounds are within 3-fold utilizing in vitro values. The underprediction of half-life from in silico data is also mostly due to the clearance prediction, while half-life prediction from in vitro, using the same in silico derived value for the distribution volume, is clearly improved.
Human fraction absorbed seems to be the most difficult parameter to predict correctly (see Figure 4). While both in silico and in vitro predictions identify the two compounds with F abs below 0.4, compounds with F abs between 0.4 and 0.6 were only recognized in two or three cases from in silico and in vitro data, respectively. However, this failure is not necessarily a concern in the present case, since the relationship between F abs and plasma concentration is assumed to be linear: a 2-fold change in fraction absorbed will lead to a 2-fold change in plasma concentration and thereby only to a 0.3 log unit change in the threshold pX. Indeed, Page [10] suggested that estimating F abs as either 1 (acids) or 0.5 (all other ion classes) is sufficiently accurate for early D2M predictions.
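The stated 0.3 log unit effect follows directly from the linearity assumption, as a one-line calculation shows (the function name is ours, for illustration).

```python
import math

def threshold_shift(conc_ratio: float) -> float:
    """Shift in threshold pX caused by scaling the predicted concentration
    (e.g. via a changed F_abs) by conc_ratio; linear kinetics assumed."""
    return math.log10(conc_ratio)
```

A 2-fold change in F abs gives threshold_shift(2.0) of about 0.30 log units; a 10-fold change gives exactly 1.0.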
Influence of basic parameters -intrinsic clearance, fraction unbound in the hepatocyte incubation, plasma protein binding and volume of distribution
Using pharmacokinetic equations assuming a one-compartment model, both dose and fraction absorbed are linearly related to the derived plasma concentrations, and it is easily understood how either of these parameters will influence the threshold pIC50 calculation. The remaining four input parameters, CLint HH , fu inc , fu, and V, on the other hand, are more intricately interlinked, especially since the first three are also included in the scaling approach converting in vitro (in silico) CLint to in vivo clearance using the well-stirred liver model (equations 5 and 9). In order to better understand how these parameters influence the outcome, threshold pIC50 values were calculated for hypothetical compounds with hepatocyte CLint values varying from 1 to 200 µl/min/10 6 cells, fraction unbound in the incubation varying from 0.01 to 0.7, plasma protein binding (fu) varying from 0.001 to 0.3, and volume of distribution varying from 0.2 to 3 L/kg (see Figure 5).
As expected, lower intrinsic clearance as measured in hepatocytes leads to a lower threshold pIC50 estimate. However, the sensitivity of this correlation varies both with the fu inc /fu b ratio and with the volume of distribution: a lower volume of distribution leads to higher sensitivity towards intrinsic clearance. Additionally, higher fu inc /fu b ratios lead to lower sensitivity towards intrinsic clearance as well as towards volume of distribution (see Figure 5, panel in upper right corner). Lower V leads to higher threshold pIC50 values, and higher fu inc /fu b ratios essentially lead to lower threshold pIC50s, at least as long as potency in blood assays, i.e., defined from total concentrations, is considered. The threshold pIC50 shifts up by three log units in the uppermost panel row (fu = 0.001) when free plasma concentrations are taken into account, whereas the threshold pIC50 in the lowest panel row (fu = 0.3) shifts only by about 0.5 log units.
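The grid behind Figure 5 can be sketched by chaining Eqs. 2, 5, 9 and 10 into a single function and scanning the same parameter ranges. The molecular weight of 400 g/mol and the simplifications R b = F abs = F g = 1 are our assumptions for illustration; the trends (lower CLint or larger V lowering the threshold, higher fu inc lowering it) match the text.

```python
import math

def threshold_pic50(clint, fu_inc, fu, v, dose=1.43, tau=24.0, qh=20.0,
                    cf=3.0, sf=2.88, mw=400.0, fold=1.0):
    """Threshold pIC50 on total plasma C_min (R_b = F_abs = F_g = 1),
    chaining Eqs. 2, 5, 9 and 10; a sketch of the grid behind Figure 5."""
    clint_vivo = cf * clint * sf / fu_inc           # Eq. 9/9a, ml/min/kg
    cl = qh * fu * clint_vivo / (qh + fu * clint_vivo)   # Eq. 5
    f = 1.0 - cl / qh                                # hepatic availability
    ke = cl * 0.06 / v                               # 1/h
    c_min = f * dose / v * math.exp(-ke * tau) / (1.0 - math.exp(-ke * tau))
    return -math.log10(c_min / 1000.0 / mw / fold)   # Eq. 10, molar scale

# mini grid over CLint and V at fixed binding (fu_inc = 0.3, fu = 0.1)
grid = {(c, v): threshold_pic50(c, 0.3, 0.1, v)
        for c in (1.0, 10.0, 100.0) for v in (0.2, 1.3, 2.9)}
```

The median volumes 0.2, 1.3 and 2.9 L/kg used in the grid correspond to the typical acid, neutral and base values quoted in the text.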
Sensitivity analysis
Sensitivity analysis is a useful tool for determining which parameters are important. Global sensitivity analysis explores the entire parameter space, considering physiologically relevant values, to determine the relative importance of each parameter. Here it was shown that volume of distribution is by far the most influential parameter (see Figure 6).
Local sensitivity analysis, on the other hand, can define how sensitive the calculation is to a parameter when another parameter is scanned over its defined range. Here, we checked the influence of the remaining parameters for different values of the volume of distribution (see Figure 7). CLint HH and fu have overlapping influence, considering a 10 % change, whereas fu inc is exactly opposite. Since in vivo CLint is multiplied by fu when calculating Cl b using the well-stirred model (Eq. 5) and is calculated from CLint HH divided by fu inc (Eqs. 9 and 9a), this is not really surprising. Note that the influence of the parameters is highest at low volume of distribution values, decreases quickly, and does not change further once V reaches a value of 2 L/kg. Considering typical volume ranges for acids, bases and neutrals, with median values of 0.2 L/kg, 2.9 L/kg and 1.3 L/kg, respectively [10,17], as indicated in Figure 7, it is clear that especially for acids a small change in either hepatocyte CLint or binding properties can make a big difference for the prediction of the threshold pIC50 determined from C min .
Influence of in silico model quality
The quality of the in silico models needs to be considered for the present analysis. Here we use AstraZeneca's in-house models for the five parameters, two of which have rather extensive training sets with more than 10,000 data points each, gathered over a long period of time (>10 years), whereas two, fu inc and P app , have intermediate training set sizes, and the fifth, volume of distribution, is based on literature data and uses only about 500 compounds. For the four models employing in-house data, about 70-75 % of the compounds within a temporal test set, i.e., compounds that were not available when the model was built, were found to be within 3-fold of the experimental value. These results are rather encouraging considering that the experimental variability is in general assumed to be about 2-fold [38]. The models are regularly updated, as it was shown earlier that continuous updating enables new chemistry to be well represented and, thus, likely better predicted by the models [39]. It should be emphasized that the predictivity of the models needs to be investigated for the drug discovery project or chemical series in question when this approach is to be applied. Note that, as shown in Figure 2, clearance predictions are reasonably accurate from in silico for most compounds in data set 2 and can be directly verified as soon as in vitro data is available for a compound.
The remaining parameter, human distribution volume, is less easily available. The QSAR model used here is based on human literature data and cannot be updated as straightforwardly as models using data from in-house screens. Also, external data is not necessarily relevant to the newest in-house chemistry, and model outcome can usually only be verified at later stages, since in vivo data is required. Nevertheless, the predictivity was considered good, with an error of prediction of 0.4 log units (2.5-fold), and scrutinizing the literature or databases for new data [40][41][42] at fairly regular intervals should ensure continued high quality of the model.
Conclusions
The present study suggests the usage of in silico ADME predictions to estimate which plasma concentration a new chemical structure may achieve when given orally at a defined dose and dosing regimen. From the plasma concentration a minimum required potency can be deduced, here referred to as threshold potency (or threshold pIC50). While it was postulated that early dose predictions from in silico were not yet possible [10], the idea here is to utilise predictions only to summarize a compound's pharmacokinetic properties for ranking compounds within a series early in the design process. Combining the in silico predictions into a physiologically meaningful score reduces the risk of deselecting a compound just because one of the parameters is outside an arbitrarily chosen limit.
Using the approach for a set of drug discovery compounds, it was shown that the threshold pIC50 was able to indicate which compounds had a higher or lower chance of success, among them the finally selected compound. It is assumed that the approach is most useful in the lead optimisation stage, when the requirements for potency coverage are known.
Additionally, it was shown that the threshold potency values calculated for a set of marketed drugs were in a reasonable range for most of the compounds when appropriate dosing regimens were considered for each. These results were also compared with the outcome when experimental in vitro or human in vivo data were used as input for the same procedure. The in silico outcome was in many cases similar to both the in vitro and the in vivo outcome, and when there were bigger differences the in vitro result was usually closer to the in vivo one. Thus, in silico predictions can easily be verified by in vitro experiments as soon as a compound is made.
The Design and Implementation of Computer Hardware Assembling Virtual Laboratory in the VR Environment
To solve the problems of slowly updated laboratory equipment, heavy hardware wear and potential danger in traditional computer hardware assembly experiments, this article proposes the design and concrete implementation of a virtual laboratory for computer hardware assembly based on an immersive VR environment. Using 3Ds Max and Unity3d to create the 3D models and build the scene, and the Mojing SDK as the tool for the VR display effect and the interactive interface, the article achieves the design and implementation of a virtual laboratory application on mobile devices. Current virtual laboratories based on Virtools can only run in the Windows environment, and this design overcomes that limitation. In addition, with rich scenes and tutorials, this design combines a head-mounted display and a somatosensory controller to greatly promote immersion and interactivity, thus enhancing students' interest. This is a new attempt to improve the effect of traditional experimental teaching.
Introduction
Virtual reality technology, also known as soul technology, is based on computer technology and uses relevant science and technology to form a virtual environment similar to real surroundings. In this virtual environment, learners can do whatever they would do in the real world [1]. Virtual reality has three characteristics: interaction, immersion and imagination. According to the degree of immersion and interaction, virtual reality can generally be divided into semi-immersive desktop virtual reality systems and immersive virtual reality systems [2]. The semi-immersive desktop virtual reality system mainly relies on tools like the 3D modelling provided by a computer to create a screen-based virtual environment, and users realize human-computer interaction through input tools such as mouse and keyboard. The advantages of this system lie in its low cost and high penetration rate [3]; the disadvantage is the poor immersion caused by the operating environment. The immersive virtual reality system fully engages the visual, auditory and tactile senses by using a head-mounted display, data gloves or other handheld devices, thereby achieving a good immersive experience.
With the development of virtual reality technology, especially in the first year of VR, well-known companies such as Google, Facebook, HTC, Xiaomi, and Storm have directly or indirectly launched their VR products and plans. Virtual reality has evolved from computer simulation to immersive virtual reality, undoubtedly providing more technical support for the application of virtual reality in education. Its application in education means that students learn in a three-dimensional virtual environment presented by virtual reality hardware and software. Its application value is mainly reflected in four aspects: firstly, creating a more realistic learning scenario for learners and enriching their learning experience through multisensory interaction [4]; secondly, enhancing learners' motivation and participation [5]; thirdly, allowing learners to learn independently; and lastly, bridging the gap between theory and practice. In recent years, virtual reality technology has been widely used in various teaching fields such as physics, chemistry, biology, history, medicine, agriculture, dance and aerospace, playing a significant role in assisting and promoting students' learning as well as providing a new way to improve traditional teaching models.
At present, the application of virtual reality technology in education has achieved remarkable results. However, compared with foreign technology and application levels, domestic practical research is still in its initial stage [6]. In the Chinese education domain, most theoretical and practical studies are based on the desktop virtual reality system (Desktop-VR), whose interactivity and immersion remain to be improved. Taking the computer hardware assembly course as an example, many scholars have conducted related research, such as Yan Lina of Sichuan Normal University, Ge Qiaoyan of Zhejiang University of Technology and Li Qiang of Bohai University, who proposed computer hardware assembly virtual labs based on 3Ds Max and Virtools technology. On the whole, however, due to the limitations of development technology and hardware, those virtual labs only support running in the Windows environment and require learners to interact with objects through keyboard and mouse during the learning process; they lack immersion and interactivity, leading to insufficient interest in learning, even though they meet learners' basic learning needs. This study adopts Unity3d as the development platform, the Mojing SDK as the development tool for VR effects, the smartphone as the operating environment for the virtual lab, and the Storm Mirror headset and Daydream somatosensory handle as the virtual reality display and interaction devices, building a computer hardware assembly virtual lab in an immersive VR environment. This solves the previous problems that computer hardware assembly virtual laboratories are not highly immersive and cannot create real teaching situations, which leads to low learning efficiency.
Teaching content design
Computer Hardware Assembly is one of the compulsory courses for students majoring in computer application in secondary vocational and technical schools, playing a vital role in the whole process of discipline development. Through this course, learners can gain a better understanding of the basic knowledge of computer hardware and improve their practical ability to assemble computers. Therefore, the computer hardware assembly virtual laboratory is based on the teaching objectives of the computer hardware assembly course of the secondary vocational school, and different presentation methods are utilized according to the difficulty level of the knowledge points [7]. When designing the teaching content on hardware names, functions and parameters, three-dimensional models and text are used to increase the perceptual intuitiveness of teaching, given that this part mainly aims to enable learners to correctly understand the names, functions and parameters of the computer hardware. For simulated assembly, a video demonstration is used first to help students understand the points of attention in the process of computer hardware assembly and master how to properly assemble a computer, considering that this is the core part of the entire virtual laboratory. After watching the demonstration video, learners are asked to start the installation using the somatosensory handle, with the aim of improving their practical ability. When designing the teaching content on common hardware faults, students are expected to become familiar with these faults and the corresponding solutions to enhance their problem-solving ability; therefore, this teaching content is presented in the form of text.
Functional module design
Based on the above analysis of the teaching contents, the virtual laboratory is divided into three functional modules: the basic learning module, the virtual experiment module and the experimental feedback module. The specific functions are introduced as follows.
Basic learning module
The basic learning module can be divided into two sub-modules: hardware model display and hardware function introduction. The main function of the hardware model display module is to enable learners to roam the virtual world from a first-person perspective through the somatosensory handle interaction device and observe the basic structure of each piece of hardware from any angle. This greatly enhances the intuitiveness of learning and compensates for the problem in traditional experimental teaching that students cannot observe computer hardware up close and for a long time due to a lack of hardware equipment. The hardware function introduction module introduces the basic functions of the hardware and the current mainstream parameters. Students not only learn the basic functions and parameters of the hardware but are also able to apply this knowledge in daily life, so that in the future they can judge a computer's performance from its configuration list when purchasing one [8]. When certain hardware is updated over time, an administrator can adjust the teaching content in the system as needed to keep it up to date.
Virtual experiment module
This module is the core of the entire virtual lab and consists of two submodules: the assembly video demonstration and simulative assembly. The assembly video demonstration is a teaching video of computer hardware assembly, produced on the basis of learning and teaching theory after analysing the textbook Computer Hardware and Assembly Maintenance and the characteristics of secondary vocational students majoring in computer application. It helps learners understand the correct installation steps and precautions, making the subsequent simulative assembly more rigorous and standardized. Learners can also rewatch the video after simulative assembly to identify any wrong operations they made during the process. The simulative assembly submodule revolves around learners assembling a computer themselves in a virtual learning environment created with virtual reality technology, which gives a strong sense of presence. Learners operate the somatosensory handle and receive corresponding feedback: if an operation is correct, the selected computer hardware moves to its position on the motherboard and the system confirms the installation; if it is wrong, the system presents a prompt to help students complete the experimental task. The resulting learning outcomes compare favourably with some existing computer hardware assembly virtual labs.
Experimental feedback module
This module consists of two submodules: common fault cases and experimental tests. The common fault case submodule introduces high-frequency computer hardware faults, such as a loose memory module, together with their solutions. It helps students build a repertoire of fixes so that they can handle a hardware failure on their own, improving their practical problem-solving ability. In the experimental test submodule, a test is taken after finishing the basic learning and virtual experiment modules. The exercises are based on the teaching objectives and content and progress from easy to difficult, which builds students' self-confidence and reinforces what they have learned. After the test, students can evaluate their own results and fill in gaps in their knowledge.
The construction of development environment
In terms of software, this virtual lab uses Unity3d as the main development environment, the Mojing SDK as the virtual reality development kit, and 3ds Max for building the 3D models.
First, create a new project in Unity3d and add an Assets folder in the project directory to store the Mojing SDK, 3D models, images, audio and video. In this way, Unity3d's memory loading mechanism can be invoked at any time during development to preview the finished effect of a scene, improving scene loading speed and runtime performance.
Then, import the Mojing SDK into the project file through Import Package under the Assets menu bar, delete the original Main Camera in the scene, create a new MojingMain as the main camera in the virtual lab, and then utilize the Import New Assets under the Assets menu to import all the resources needed into the project.
Finally, for each function page we create separate scenes, skyboxes and ground systems to hold its teaching content; this is the core of the entire virtual lab. Switching between scenes is implemented with SceneManager.LoadScene.
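As a concrete illustration, scene switching with SceneManager.LoadScene might look like the sketch below. The scene names "MainMenu" and "BasicLearning" are assumptions for illustration, not names taken from the project.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch: switching between the functional-module scenes.
// Scene names here are illustrative placeholders.
public class SceneSwitcher : MonoBehaviour
{
    // Hooked to a menu button, e.g. EnterModule("BasicLearning").
    public void EnterModule(string sceneName)
    {
        SceneManager.LoadScene(sceneName);
    }

    // Return to the main menu scene from any module.
    public void BackToMenu()
    {
        SceneManager.LoadScene("MainMenu");
    }
}
```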
The realization of the main functional modules in the virtual laboratory
According to the functional module design above, this paper divides the virtual lab into separate learning modules so that learners can study the teaching content they actually need. During development, each experimental functional module is packaged in its own scene and linked by a hierarchy of scripts, allowing learners to study selectively as required. The implementation of each scene is as follows:
Main menu module
As soon as learners enter the virtual lab, they see the menu interface (as shown in Figure 1). Because the users of the virtual lab are computer application students from secondary vocational schools, the interface layout follows the principles of integrity and navigation. Integrity means the UI buttons are consistent with the background colour of the scene, so that learners perceive the interface as a whole and are not distracted by unnecessary visual interference. Navigation means the UI buttons guide accurately, so that learners can clearly choose the learning module they need without confusion. The main menu contains four interactive UI buttons: Basic Learning, Experimental Assembly, Experimental Feedback, and Experimental Help. Experimental Help provides a guide for first-time users and explains the precautions during use, helping learners complete the learning tasks. Learners choose a learning module through gaze interaction. To implement gaze interaction, the UI resources made in Photoshop CS6 are first imported into the Unity project; a Canvas is created in the scene to hold the UI; a Box Collider is added to each UI element for collision detection; and event-triggering script code is attached to the UI elements and to the HeadCtrl on the Canvas. The core script code is as follows.
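The script itself is not reproduced in the text; the following is a plausible sketch of the dwell-based gaze interaction described above. The class and member names (GazeButton, dwellTime, onGazeSelect) are assumptions, not names from the original project.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Attached to each UI element alongside its Box Collider.
public class GazeButton : MonoBehaviour
{
    public UnityEvent onGazeSelect;  // e.g. wired to load the chosen module
}

// Attached to the main camera (MojingMain); casts a ray from the view
// centre and fires the UI event after the gaze dwells long enough.
public class HeadCtrl : MonoBehaviour
{
    public float dwellTime = 2f;     // seconds of sustained gaze (illustrative)
    private float gazeTimer;
    private GazeButton current;

    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit))
        {
            GazeButton button = hit.collider.GetComponent<GazeButton>();
            if (button != null)
            {
                if (button != current) { current = button; gazeTimer = 0f; }
                gazeTimer += Time.deltaTime;
                if (gazeTimer >= dwellTime)
                {
                    button.onGazeSelect.Invoke();
                    gazeTimer = 0f;
                }
                return;
            }
        }
        current = null;   // gaze left the UI; reset the dwell timer
        gazeTimer = 0f;
    }
}
```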
Basic learning module
The main function of the basic learning module is to create a highly immersive and interactive virtual learning environment. During learning, learners use the somatosensory handle to wander through the scene from a first-person perspective and observe the hardware models from any angle according to their own needs (as shown in Figure 2), which compensates for the traditional teaching problem that, with too little hardware equipment to go around, students cannot observe a given component closely or for long. Learners interact with each hardware model through head-controlled gaze interaction and form a preliminary mental model of its name, function, basic parameters and related knowledge (as shown in Figure 3), preparing them to carry out the simulated assembly according to each component's role. To study a piece of hardware again, students push the trigger button on the somatosensory handle to go back and review it. Scene roaming is realized by displacing the virtual camera: in essence, the data from the smartphone's accelerometer sensor are mapped onto four displacement directions of the virtual camera, namely forward, backward, left and right. First, create a Character Controller in the scene and attach script code to it. In the script, Controller.Move is called in the Update function to dynamically update the controller's position, and Input.acceleration is used to read the phone's accelerometer. Its components along the X, Y and Z axes are accessed as Input.acceleration.x, Input.acceleration.y and Input.acceleration.z.
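Under these assumptions, the accelerometer-driven roaming might be sketched as follows. The speed value and the axis mapping are illustrative and would need tuning against the actual device orientation.

```csharp
using UnityEngine;

// Sketch: map phone accelerometer tilt onto first-person movement
// of the Character Controller created in the scene.
public class SceneRoam : MonoBehaviour
{
    public float speed = 2f;                 // illustrative value
    private CharacterController controller;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        Vector3 acc = Input.acceleration;
        // Tilt left/right maps to x, tilt forward/back maps to z.
        Vector3 move = transform.TransformDirection(new Vector3(acc.x, 0f, acc.y));
        // CharacterController.Move also resolves collisions with scene colliders.
        controller.Move(move * speed * Time.deltaTime);
    }
}
```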
During roaming, the character controller cannot advance if it collides with another object in the scene; it can only turn left or right or move back, which prevents it from penetrating other objects and preserves the immersive experience. This is realized with the physics engine provided by Unity3d by adding Colliders to the virtual camera and the 3D models in the scene. The core script code of the scene roaming function is as follows:

Vector3 newGlobalPoint = movingPlatform.activePlatform.TransformPoint(movingPlatform.activeLocalPoint);
moveDistance = (newGlobalPoint - movingPlatform.activeGlobalPoint);
if (moveDistance != Vector3.zero)
    controller.Move(moveDistance);
Virtual experiment module
The virtual experiment module consists of two parts: the assembly video demonstration and simulative assembly. The assembly video demonstration presents, in video form, the precautions and the concrete operating steps of assembling computer hardware in a real environment. In use, learners point the ray emitted by the somatosensory handle at the 3D blackboard model in the lab; when the collision event is triggered, the computer assembly teaching video plays, giving learners a preliminary idea of how to install the machine correctly and laying a theoretical foundation for the simulative assembly that follows. After watching the video, learners perform the simulated installation themselves. Because simulative assembly is the core of the entire virtual laboratory, the scene design follows the principles of educational soundness, scientific accuracy and artistry. Educational soundness and scientific accuracy are reflected in content designed around the teaching objectives and in line with students' cognitive development; artistry is reflected in the layout, colour and lighting of the scene interface, which arouse students' interest in learning. In this part, a highly immersive virtual experiment environment guides students to complete the simulated assembly experiment using the theoretical knowledge learned earlier. Taking the simulated installation of the CPU as an example, all the hardware models on the console can be detected by using the somatosensory handle to control the direction of the ray (as shown in Figure 4). When the ray detects a hardware's 3D model, that hardware is selected; the system records its current position and name, and then re-emits the ray from the hardware toward the correct position on the motherboard.
The ray detects the collider placed on the motherboard and feeds the corresponding Tag data back to the API in the script. When the Tag value read from the hardware matches the Tag value at the target position on the motherboard, the hardware moves along the ray to the correct position on the motherboard and the system gives positive feedback. If the match fails, a corresponding prompt is shown to help users complete the experimental task (as shown in Figure 5).
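The matching step described above can be sketched roughly as below. The Tag name "CPU" and the class and method names are assumptions for illustration only.

```csharp
using UnityEngine;

// Sketch: accept a piece of hardware at a motherboard slot only when
// its Tag matches the Tag expected at that slot.
public class AssemblySlot : MonoBehaviour
{
    public string expectedTag = "CPU";   // illustrative tag name

    public bool TryInstall(GameObject hardware)
    {
        if (hardware.CompareTag(expectedTag))
        {
            // Snap the model into place and report success,
            // so the caller can show the positive feedback.
            hardware.transform.SetPositionAndRotation(
                transform.position, transform.rotation);
            return true;
        }
        return false;  // wrong part: caller shows the corrective prompt
    }
}
```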
Experimental feedback module
The experimental feedback module supplements and checks what was learned in the first two modules; it consists of common fault cases and experimental tests. The common hardware fault submodule presents faults by category, so that learners can study the common faults of each kind of hardware without confusion. It is implemented by using the somatosensory handle to direct the ray at each UI element in the scene and enter the corresponding learning unit; the effect is shown in Figure 6. The experimental test module is designed around the principle of openness: rather than only marking answers right or wrong, it leaves room for reflection and improvement, because effective feedback should be directive and illuminating and indirectly resolve specific problems students encountered during the experiment. Since this part takes the form of exercises, and learners may want to take notes while working, it is presented on a single screen and operated by touch; the effect is shown in Figure 7.
Conclusion
In this paper, a virtual laboratory for computer hardware assembly is built in a VR environment, motivated by the training needs of computer hardware assembly teaching in a traditional environment and combining 3ds Max and Unity3d. The virtual laboratory uses VR glasses and somatosensory handles as its presentation platform and interaction mode, so that learners can immerse themselves in the process. The scene creation and instructional design of the three functional modules (basic learning, virtual experiment and experimental feedback) enhance students' interest in learning; they not only compensate for the shortcomings of traditional experimental teaching, but also remove the limitation that existing virtual labs based on Virtools technology can only run under Windows. In subsequent research, the interaction mode and the refinement of interactive operations in the virtual experiment will be further improved. In addition, an AI speech recognition system will be considered, so that learners can complete virtual experiments through multiple interaction modes, improving the authenticity and operability of the experiments and, in turn, learners' motivation and experience.
NMDA Receptors in Accumbal D1 Neurons Influence Chronic Sugar Consumption and Relapse
Abstract Glutamatergic input via NMDA and AMPA receptors within the mesolimbic dopamine (DA) pathway plays a critical role in the development of addictive behavior and relapse toward drugs of abuse. Although well-established for drugs of abuse, it is not clear whether glutamate receptors within the mesolimbic system are involved in mediating chronic consumption and relapse following abstinence from a non-drug reward. Here, we evaluated the contribution of mesolimbic glutamate receptors in mediating chronic sugar consumption and the sugar-deprivation effect (SDE), which is used as a measure of relapse-like behavior following abstinence. We studied four inducible mutant mouse lines lacking the GluA1 or GluN1 subunit in either DA transporter (DAT) or D1R-expressing neurons in an automated monitoring system for free-choice sugar drinking in the home cage. Mice lacking either GluA1 or GluN1 in D1R-expressing neurons (GluA1 D1CreERT2 or GluN1 D1CreERT2 mice) have altered sugar consumption in both sexes, whereas GluA1 DATCreERT2 and GluN1 DATCreERT2 mice do not differ from their respective littermate controls. In terms of relapse-like behavior, female GluN1 D1CreERT2 mice show a more pronounced SDE. Given that glutamate receptors within the mesolimbic system play a critical role in mediating relapse behavior of alcohol and other drugs of abuse, it is surprising that these receptors do not mediate the SDE, or in the case of female GluN1 D1CreERT2 mice, show an opposing effect. We conclude that a relapse-like phenotype of sugar consumption differs from that of drugs of abuse on the molecular level, at least with respect to the contribution of mesolimbic glutamate receptors.
Introduction
It is assumed that the problematic chronic use of sugar, similar to chronic consumption of drugs of abuse, can lead to an addictive-like phenotype. However, the concept of "sugar addiction" is controversial and only a few studies have attempted to determine the addictive properties of sugar using rigorous scientific criteria (Avena et al., 2009;Wiss et al., 2018).
These studies suggest that behavioral phenotypes associated with chronic consumption of drugs of abuse and sugar consumption are similar with respect to withdrawal responses, compulsive over-consumption, craving, and loss of control (Avena et al., 2009; Wiss et al., 2018). After deprivation even relapse behavior can ensue. Thus, rats trained for 28 d to drink a sucrose solution and deprived for 14 d displayed a sugar-deprivation effect (SDE; Avena et al., 2005). In a more recent study (Wei et al., 2021), the addictive-like properties of sugar were systematically examined in male and female mice using established paradigms and models from the drug addiction field (Sanchis-Segura and Wei et al., 2021). In this study, female mice were more vulnerable to the addictive-like properties of sugar than male mice, showing higher long-term, excessive sugar drinking, and a more pronounced relapse-like sugar consumption as assessed by measuring the SDE (Wei et al., 2021). The deprivation effect is a measure of consumption during a relapse-like situation in the addiction field (Vengeliene et al., 2014; Spanagel, 2017).
Given the similarities of phenotypes for the chronic use of drugs of abuse and sugar, we speculated that there may also be similarities on the molecular level. In the addiction field, there is strong evidence that an interaction between the glutamatergic and mesolimbic dopamine (DA) systems is critical for mediating the reinforcing effects of drugs of abuse and consequently addictive behavior and relapse (Gass and Olive, 2008). In particular, glutamatergic synapses on DA neurons in the ventral tegmental area (VTA) and D1 receptor-expressing medium spiny neurons (MSNs) of the nucleus accumbens (NAc) both modulate the reinforcing properties of drugs of abuse and reward-dependent learning processes (Lüscher and Malenka, 2011; Lüscher, 2013; Scofield et al., 2016). In support of this, disruption of NMDA receptors in midbrain DA neurons abolishes enduring cocaine-induced plasticity in the NAc, thus reducing the incubation of craving and subsequent relapse behavior (Engblom et al., 2008; Mameli et al., 2009). Furthermore, using different mutant mouse lines that lack GluN1 and GluA1 receptor subunits in DA transporter (DAT) and D1R-expressing neurons, respectively, it was shown that GluN1 and GluA1 receptor subunits within these neuronal subpopulations mediate the alcohol-deprivation effect (ADE), which is a measure for relapse behavior (Eisenhardt et al., 2015a).
Some drug-induced neuroplastic changes within the mesolimbic system may also occur following consumption of natural rewards. For example, sucrose intake increases the phosphorylation and trafficking of accumbal AMPA receptor GluA1 subunits (Tukey et al., 2013) and alters the morphology of the MSNs (Klenowski et al., 2016).
In addition, other studies have shown that a natural reward experience activates VTA DA cells and alters AMPA and NMDA receptor distribution and function in the NAc similar to psychostimulants (Pitchers et al., 2012;Beloate et al., 2016). Therefore, mesolimbic glutamate receptors may, at least in part, be involved in mediating chronic sugar consumption and relapse following abstinence. Furthermore, there may be sex-dependent effects in sugar consumption and relapse, as female rats have increased levels of the AMPA receptor GluA1 and NMDA receptor NR1 subunits within the mesolimbic system after cocaine, methamphetamine or ethanol self-administration, relative to male rats (Devaud and Alele, 2004;Bechard et al., 2018;Pena-Bravo et al., 2019).
The aim of the present study was to systematically examine the involvement of AMPA and NMDA receptors within the mesolimbic system in mediating chronic long-term sugar consumption and the SDE in a sex-dependent manner. Here, we generated inducible mutant mice expressing GluN1 or GluA1 mutations under the control of the DAT (Slc6a3) or D1 (Drd1a) promoter following the previously described procedure (Mameli et al., 2009; Parkitna et al., 2009, 2010; Eisenhardt et al., 2015a). We focused on AMPA and NMDA receptors in D1-receptor-containing MSNs, as several studies (Hikida et al., 2010; Lobo and Nestler, 2011; Calipari et al., 2016; Soares-Cunha et al., 2016; Ma et al., 2018; Bilbao et al., 2020) suggest that this neuronal population is more involved in mediating the chronic effects of drugs of abuse and natural rewards than D2-containing MSNs. Using a fully automated, highly precise home cage monitoring system (Eisenhardt et al., 2015b) for sugar drinking in mice, we systematically examined GluN1 DATCreERT2 , GluA1 DATCreERT2 , GluN1 D1CreERT2 , and GluA1 D1CreERT2 male and female mice in a long-term free-choice sugar drinking procedure and studied the SDE following an abstinence phase.
Animals
We generated mutant mice expressing GluN1 or GluA1 mutations under control of the DAT (Slc6a3) or D1 (Drd1a) promoter following the previously described procedure (Mameli et al., 2009; Parkitna et al., 2009, 2010; Eisenhardt et al., 2015a). In short, GluN1 DATCreERT2 , GluA1 DATCreERT2 , GluN1 D1CreERT2 , and GluA1 D1CreERT2 mice were generated by crossing mice with an inducible Cre-recombinase under the DAT- or D1-promoter with mice carrying floxed alleles for GluN1 or GluA1. The DATCreERT2 and D1CreERT2 mice were generated by recombining a construct containing an improved Cre-recombinase fused to a modified ligand binding domain of the estrogen receptor (CreERT2) into a bacterial artificial chromosome containing the gene encoding DAT (Slc6a3) or D1 (Drd1a) by recombineering. GluN1 fl/fl and GluA1 fl/fl mice, having exons 11-18 of the Grin1 or exon 11 of the Gria1 alleles, respectively, flanked with loxP sites, were generated by gene targeting in embryonic stem cells (Zamanillo et al., 1999; Niewoehner et al., 2007). For induction of the mutation, mice were treated with 1 mg of tamoxifen dissolved in neutral oil intraperitoneally twice a day for five consecutive days (Erdmann et al., 2007). Mice were treated with tamoxifen at an age of 8-10 weeks and were allowed to recuperate for at least three weeks before experiments started. For genotyping of the DATCreERT2 and D1CreERT2 transgene, we used the primers GGC TGG TGT GTC CAT CCC TGA A and GGT CAA ATC CAC AAA GCC TGG CA. The GluN1 and GluA1 flox variants were genotyped using the primers GGA CAG CCC CTG GAA GCA AAA T and GGA CCA GGA CTT GCA GTC CAA AT for GluN1, and CAC TCA CAG CAA TGA AGC AGG AC and CTG CCT GGG TAA AGT GAC TTG G for GluA1.
For all experiments, adult male and female GluN1 DATCreERT2 , GluA1 DATCreERT2 , GluN1 D1CreERT2 , and GluA1 D1CreERT2 and their wild-type littermate mice from at least six consecutive backcrosses with C57BL/6N were used (8-10 weeks at the beginning of the experiments). As controls, floxed littermates not carrying the Cre-recombinase were used.
Mice were single-housed in standard hanging cages at 21 ± 1°C and 50 ± 5% relative humidity on a reversed 12/12 h light/dark cycle, with lights on at 7:30 P.M. The animals were provided with standard rodent food (Altromin Spezialfutter GmbH & Co, LASQC diet Rod16-H. Composition: cereals, vegetable by-products, minerals, oils and fats, yeast; crude nutrients: 16.30% crude protein, 4.30% crude fat, 4.30% crude fiber, 7.00% crude ash), a bottle containing 5% (w/v) sugar solution during the long-term sugar paradigms (see below for details) and tap water ad libitum. All the experiments were performed in the dark cycle. All mice were handled on a daily basis before starting the experiments and were habituated to the behavioral testing environments. Procedures for this study complied with the regulations covering animal experimentation within the European Union (European Communities Council Directive 86/609/EEC) and Germany (Deutsches Tierschutzgesetz) and the experiment was approved by the German animal welfare authorities (Regierungspräsidium Karlsruhe).
Home cage two-bottle free-choice sugar drinking and assessment of relapse-like drinking by means of the SDE
For this experiment, 360 mice were used in total: 49 GluN1 DATCreERT2 (25 males and 24 females), 41 GluN1 D1CreERT2 (21 males and 20 females), 53 GluA1 DATCreERT2 (26 males and 27 females), 37 GluA1 D1CreERT2 (17 males and 20 females), and 180 respective control littermates (90 males and 90 females). Mice had continuous free-choice access to a bottle containing a sugar solution (sucrose 5% w/v) and a bottle with tap water in the home cage for eight weeks. During the last 3 d of sugar exposure, sugar and water intake and locomotion were recorded using a drinkometer system (Eisenhardt et al., 2015a,b; Bilbao et al., 2019) and were used as the baseline for comparison with the SDE. Mice were afterward deprived of sugar for 12-15 d, during which they only had access to two bottles of tap water. After the deprivation period, the SDE was tested for 24 h by reintroducing the sugar bottle.
Assessment of drinking patterns by a fully automated drinkometer device
Sugar and water intake, preference over water and locomotor activity were measured during baseline and SDE measurements with a fully automated, highly precise monitoring system as described previously (INFRA-E-MOTION; Eisenhardt et al., 2015a,b; Bilbao et al., 2019). Briefly, during recording, the standard lid of the mouse home cage was replaced with the drinkometer lid containing two holes for special drinkometer bottles with a curved bottleneck and different tips for water (0.8 mm opening) and sugar (1.5 mm opening) solutions. The drinkometer system was configured to sample every 4 min the amount (g) of sugar and water each mouse consumed. Sugar and water intake, preference over water and locomotor activity were calculated every 4 h to assess circadian drinking patterns and to obtain a temporal dissection of the SDE. The SDE in mice is usually short-lasting (Vengeliene et al., 2014) and therefore the first 4 h during the SDE provide the most valid measurement (Eisenhardt et al., 2015a).
Sugar (g/kg) and water (ml) intake, sugar preference (% of total fluid intake) and locomotor activity were calculated per day. During baseline and SDE measurements, sugar and water intake and locomotion were additionally calculated in 4-h time intervals. Baselines were calculated as the mean of the last 3 d of baseline recording.
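As a sketch of these calculations (not the authors' actual analysis code, which is not provided), the baseline and the SDE expressed as a percentage of baseline could be computed like this; the numeric values are illustrative, not data from the study:

```python
def baseline_intake(last_three_days):
    """Baseline = mean intake (g/kg) over the last 3 d of recording."""
    return sum(last_three_days) / len(last_three_days)

def sde_percent(last_three_days, relapse_intake):
    """Relapse intake expressed as a percentage of baseline intake."""
    return 100.0 * relapse_intake / baseline_intake(last_three_days)

# Illustrative numbers, not data from the study:
base = [10.0, 12.0, 11.0]        # g/kg on the last 3 baseline days
print(baseline_intake(base))     # 11.0
print(sde_percent(base, 16.5))   # 150.0
```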
Statistics
Statistical analyses were performed by one-way or two-way ANOVA with repeated measures and the Newman-Keuls test for post hoc comparisons using Statistica 10 (StatSoft). All values are given as mean ± SEM, and statistical significance was set at p < 0.05.
The ANOVA model for the long-term free-choice home cage drinking and SDE contained the fixed effects of sugar deprivation (baseline and relapse), gene (wild-type and GluA1 or GluN1), and the interaction deprivation × gene.
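The authors ran their ANOVAs in Statistica; purely to illustrate the underlying computation, the one-way F statistic can be derived from first principles as follows (the numbers are made up, not study data):

```python
def one_way_anova_f(*groups):
    """Compute the one-way ANOVA F statistic for k groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (df = n_total - k)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)

    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Two illustrative groups: F = (1.5 / 1) / (4 / 4) = 1.5
print(one_way_anova_f([1, 2, 3], [2, 3, 4]))  # 1.5
```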
Specific GluN1 receptor subunit gene inactivation
After eight weeks of chronic, 24-h free-choice sugar exposure, male and female GluN1 DATCreERT2 mice did not differ from their wild-type littermates in daily sugar intake. A period of sugar deprivation significantly increased the sugar intake in all mutants and the respective wild-type mice (Fig. 1C,I), indicative of a SDE (two-way ANOVA, deprivation effect for Fig. 1C: F(11,539) = 62.3, p < 0.0001 and for Fig. 1I: F(11,517) = 72.8, p < 0.0001). The dissection of the baseline and SDE drinking into 4-h time intervals showed the typical diurnal pattern of intake, characterized by higher drinking during the dark, active phase, and lower drinking during the light, inactive phase of the day. Specifically, during the SDE (i.e., relapse), sugar intake was strongly pronounced during the first 4-8 h of re-exposure and lasted no longer than 24 h in all genotypes (two-way ANOVA, gene effect for Fig. 1C: F(1,49 In contrast to the DAT-containing neurons, GluN1 mutation in D1-containing neurons (GluN1 D1CreERT2 mice) had an effect on chronic sugar drinking. As depicted in Figure 2, male (Fig. 2A) and female (Fig. 2G) GluN1 D1CreERT2 mice showed a significant decrease in total, 24-h free-choice sugar drinking (one-way ANOVA for Fig When calculating the percentage of relapse over baseline during the first 4 h of sugar re-exposure for intake and locomotion, male GluN1 D1CreERT2 mice did not differ from their wild-type littermates (one-way ANOVA for Fig. 2E: F(1,37) = 1.9, p = 0.2 and for Fig. 2F: F(1,37) = 0.3, p = 0.6). However, in females, the SDE magnitude was higher for intake (Fig. 2K: F(1,37) = 8, p < 0.01) and lower for locomotion (Fig. 2L: F(1,37) = 6.2, p < 0.05).
Specific GluA1 receptor subunit gene inactivation
Deletion of the AMPA receptor GluA1 subunit in DAT-containing neurons did not have any effect on chronic sugar drinking, as the phenotypes displayed by both male (Fig. 3A,B) and female (Fig. 3G,H) GluA1 DATCreERT2 mice did not differ from their respective wild-type littermates after eight weeks of chronic, 24-h free-choice sugar exposure. As shown in Figure 3, daily sugar consumption (one-way ANOVA for Deletion of the AMPA receptor GluA1 subunit in DAT-containing neurons also did not have a role in relapse to sugar, as measured by the SDE (Fig. 3C-F,I-L). During baseline and SDE, intake during 4-h intervals resembled that already observed with the GluN1 mutations. That is, all mice showed the typical diurnal pattern of consumption and a strongly pronounced sugar intake during the first 4-8 h of re-exposure, which lasted no longer than 24 h. Statistical analysis showed a deprivation effect for male (Fig. 3C: F(11,550) As observed with the GluN1 mutant mice, GluA1 mutation in D1-containing neurons (in contrast to DAT-containing neurons) had an effect on chronic sugar drinking behavior (Fig. 4). That is, regardless of sex, GluA1 D1CreERT2 mice showed a significant increase in total, 24-h free-choice sugar drinking (one-way ANOVA, Fig. 4A: F(1,35) = 4.3, p < 0.05 and Fig. 4G: F(1,38) = 17.1, p < 0.001). Locomotor activity did not change in male mutants (Fig. 4B: F(1,35) = 0.9, p = 0.3) but was significantly increased in female GluA1 D1CreERT2 mice (Fig. 4H: F(1,38) = 6.9, p < 0.05).
Discussion
Here, we report on three findings in different inducible mouse mutant lines lacking either GluN1 or GluA1 receptor subunits in DAT or D1-containing neurons in a chronic free-choice sugar consumption paradigm and the SDE model. First, long-term sugar intake is modulated by AMPA and NMDA receptors in D1-containing neurons in an opposing manner. The specific deletion of the GluA1 subunit, which yields non-functional AMPA receptors in primarily D1-containing MSNs, increases excessive sugar drinking in male and female mice, whereas mice with inducible GluN1 receptor subunit deletion in D1-expressing neurons show significantly reduced chronic sugar intake. Second, neither AMPA nor NMDA receptors in DA neurons influence the development and maintenance of sugar consumption. Third, female GluN1 D1CreERT2 mice show a more pronounced relapse-like behavior in the SDE model.
The four genetic mouse models used here have some advantages over other approaches for gene targeting, allowing a more precise demonstration of the functional role regarding the gene of interest. First, these mutant models have high specificity of the deletion of GluN1 or GluA1 in DAT-expressing or D1-expressing neurons, as shown by previous co-localization studies (Engblom et al., 2008;Eisenhardt et al., 2015a). Cre-expression patterns fit with that described for DAT with strong expression in the VTA and for D1Rs with strong expression in the NAc and dorsal striatum. From previous studies (Engblom et al., 2008;Eisenhardt et al., 2015a), we conclude that we primarily have an ablation of individual glutamate receptor subunits within the mesolimbic DA system in our four mutant mouse lines. Second, the use of a temporally controlled gene deletion (induced by tamoxifen injections) circumvents potential developmental compensatory mechanisms, which may offset the loss of the gene and consequently mask its functional role.
As previously reported (Wei et al., 2021), mice show a typical diurnal pattern of sugar consumption. In all four mouse lines, such a pattern is maintained, and females showed consistently higher intake of a sugar solution relative to male mice. Long-term sugar intake was significantly more pronounced in GluA1 D1CreERT2 male and female mice, suggesting that functional AMPA receptors onto D1-containing neurons play a role in the regulation of excessive sugar consumption. Our finding largely agrees with a previous report that studied the regulation of AMPA receptors on NAc synapses by sucrose intake. Tukey et al. (2013) showed that repeated daily ingestion of a sucrose solution potentiated accumbal synapses through incorporation of calcium permeable AMPA receptors. In contrast, deletion of functional NMDA receptors on D1-expressing neurons reduced excessive sugar intake in both sexes. This finding is consistent with the few studies to date that have addressed the role of NMDA receptors in food or sugar binging, which have reported a reduction after systemic administration of the NMDA receptor antagonist memantine (Bisaga et al., 2008;Popik et al., 2011). A reduction of binge eating following memantine treatment was also seen in a human study (Brennan et al., 2008). These results and the fact that our GluN1 DATCreERT2 and GluN1 D1CreERT2 mutant mice showed no change in sugar binging suggest that non-mesolimbic brain regions may also contribute to sugar binging.
In terms of relapse-like behavior, we tested the four mutant mouse lines in the SDE model and found that neither AMPA nor NMDA receptors in DA neurons influenced the augmented sugar consumption following a deprivation period. However, female GluN1 D1CreERT2 mutant mice showed a more pronounced SDE. In contrast, relapse behavior to drugs of abuse is strongly under the control of mesolimbic glutamate receptors, especially the alcohol deprivation effect (ADE), which, like the SDE, is a measure of relapse-like behavior (Vengeliene et al., 2014). All four mouse mutant lines tested here were also tested in a previous study for alcohol relapse behavior. All mutant mice showed a significantly reduced ADE, results supported by intra-VTA and intra-accumbal pharmacological blockade of AMPA and NMDA receptors (Eisenhardt et al., 2015a). Similarly, the AMPA antagonist GYKI 52466 completely abolished the ADE in rats, and a variety of NMDA receptor antagonists dose-dependently inhibited the ADE in rats (Hölter et al., 1996, 2000; Vengeliene et al., 2005; Kolik et al., 2017). For other drugs of abuse there are also consistent findings demonstrating that mesolimbic AMPA and NMDA receptors are critical for relapse behavior. Accumbal blockade of AMPA or NMDA receptors by various antagonists blocks relapse behavior for cocaine, heroin, and nicotine (Bäckström and Hyytiä, 2007; LaLumiere and Kalivas, 2008; Gipson et al., 2013; Doyle et al., 2014), and deletion of NMDA receptors in D1-containing neurons reduces the incubation of cocaine seeking and relapse (Mameli et al., 2009). In conclusion, pharmacological inhibition or genetic inactivation of accumbal NMDA receptors reduces relapse to drugs of abuse/alcohol, whereas GluN1 D1CreERT2 female mutant mice exhibit an augmented SDE.
In summary, a previous study demonstrated the occurrence of an addictive-like phenotype for sugar in male and female mice similar to that for drugs of abuse (Wei et al., 2021). Here, we show that mesolimbic AMPA and NMDA receptors do not play a critical role in relapse to sugar consumption. These findings differentiate a natural reward from drugs of abuse at the molecular level, as mesolimbic NMDA and AMPA receptors are essential for drug-induced neuroplasticity and subsequent relapse behavior.
Investigation of the X-ray Emission of the Large Arcade Flare of 2 March 1993
A large arcade flare of 2 March 1993 has been investigated using X-ray observations recorded by the Yohkoh and GOES satellites and the Compton Gamma Ray Observatory. We analyzed the quasi-periodicity of the hard-X-ray (HXR) pulses in the flare impulsive phase and found close similarity between the quasi-periodic sequence of the pulses and that observed in another large arcade flare, of 2 November 1991. This similarity helped to explain the strong HXR pulses which were recorded at the end of the impulsive phase as due to an inflow of dense plasma (coming from the chromospheric evaporation) into the acceleration volume inside the cusp. In HXR images a high flaring loop was seen with a triangular cusp structure at the top, where the electrons were efficiently accelerated. The sequence of HXR images allowed us to investigate complicated changes in the precipitation of the accelerated electrons toward the flare footpoints. We have shown that all these impulsive-phase observations can be easily explained in terms of the model of electron acceleration in oscillating magnetic traps located within the cusp structure. Some soft-X-ray (SXR) images were available for the late decay phase. They show a long arcade of SXR loops. Important information about the evolution of the flare during the slow decay phase is contained in the time variation of the temperature, T(t), and emission measure, EM(t). This information is the following: i) weak heating occurs during the slow decay phase and it slowly decreases; ii) the decrease in the heating determines the slow and smooth decrease in EM; iii) the coupling between the heating and the amount of hot plasma makes the flare evolve along a sequence of quasi-steady states during the slow decay phase (QSS evolution).
Introduction
Quasi-periodic variations were observed in the hard X-ray (HXR) emission of many flares (Lipa, 1978; see also the review of Nakariakov and Melnikov (2009) and references therein). In our previous papers (Jakimiec and Tomczak, 2010 (Paper I); 2012 (Paper II)) we investigated quasi-periodic oscillations in flares with periods P = 10-60 s, but in Paper I we found three flares with periods P > 120 s. They turned out to be large arcade flares. Investigation of the quasi-periodic oscillations in such large flares is very important, since their large sizes allow us to investigate the structure of the oscillation volume more comprehensively. Unfortunately, appropriate observations of X-ray oscillations in such large flares are very rare. In Paper III (Jakimiec and Tomczak, 2013) we investigated such a large arcade flare of 2 November 1991. Its HXR light curve is shown in Figure 1a. We found there direct observational evidence that the strong HXR pulse at 16:34-16:35 UT is the result of dense plasma coming from the chromospheric evaporation and flowing into the acceleration volume located within the loop-top cusp structure.
In Paper III we have also found that i) the precipitation of accelerated electrons from the cusp structure is strongly asymmetric, i.e. there is strong difference in the precipitation into the northern and southern arms ("legs") of the flaring loop; ii) there are significant changes of the precipitation with time; iii) the properties of precipitation depend on the energy of accelerated electrons.
It has been shown that these complicated properties of the precipitation can be easily explained in terms of our model of oscillating magnetic traps (see Paper III).
In the present paper we investigate a large arcade flare of 2 March 1993. Its HXR light curve is shown in Figure 1b. We see a close similarity of the light curves in Figures 1a and 1b, which indicates that there is a close similarity in the impulsive-phase development of these two flares. Section 2 contains the analysis of the observations, Section 3 presents the discussion of the decay phase of the flare, and a summary of the paper is given in Section 4.
Observations and Their Analysis
In the present paper we investigate a large arcade flare which occurred at the eastern limb on 2 March 1993. It was a long-duration event (LDE) of GOES class M5.1 (see Figure 2). The HXR light curves, recorded by the Yohkoh Hard X-ray Telescope (HXT; Kosugi et al., 1991) and the Compton Gamma Ray Observatory Burst and Transient Source Experiment (CGRO/BATSE; Fishman et al., 1992), are shown in Figures 3 and 4a, respectively. The nominal energy range of the BATSE observations is hν > 25 keV, but in Paper III we found that the actual BATSE energy range was hν > 33 keV. The HXRs began to rise at 21:01 UT (beginning of the impulsive phase). Unfortunately, there were no Yohkoh soft-X-ray (SXR) observations for the flare impulsive phase. Some SXR images were available only for the late phase of the flare decay (see Section 2.3).
Analysis of Quasi-periodicity of the HXR Pulses
We have investigated the quasi-periodicity of the HXR pulses using our standard method described in Papers II and III. We have calculated the normalized time series

S(t) = [F(t) − F̄(t)] / F̄(t),    (1)

where F(t) is the measured HXR flux and F̄(t) is its running average. The red curve in Figure 4a shows F̄(t) calculated with averaging time δt = 120 s. The normalized time series, S(t), is shown in Figure 4b. Next we measured the time intervals, P_i, between successive HXR peaks and calculated the mean period, P = ⟨P_i⟩, and its standard (r.m.s.) deviation, σ(P). Our criterion of quasi-periodicity is σ(P)/P ≪ 1. The values of P and σ(P) are given in Table 1. Figures 3 and 4, together with Table 1, show clear quasi-periodicity of the HXR pulses during the impulsive-phase rise (21:01-21:12 UT). Table 1 also shows that the period P is longer near the impulsive-phase maximum than during the impulsive-phase rise. We explain this effect as being due to a quick increase in the density inside the acceleration volume, which causes a significant decrease in the Alfvén speed, v_A (see Section 3).
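The analysis procedure described above (running average, normalized series S(t), intervals between successive peaks, and the σ(P)/P criterion) can be sketched as follows. The light curve here is synthetic, with a 40-s period injected purely for illustration; only the 120-s averaging window and the quasi-periodicity criterion come from the text:

```python
import numpy as np
from scipy.signal import find_peaks

dt = 2.0                          # sampling interval [s] (illustrative)
t = np.arange(0.0, 600.0, dt)     # 10 min of synthetic data
period_true = 40.0                # injected period [s]

# Synthetic HXR flux: slowly rising envelope times quasi-periodic pulses.
F = (1.0 + t / 300.0) * (1.0 + 0.3 * np.sin(2.0 * np.pi * t / period_true))

# Running average F_bar(t) over delta_t = 120 s, then S(t) = (F - F_bar) / F_bar.
win = int(120.0 / dt)
F_bar = np.convolve(F, np.ones(win) / win, mode="same")
S = (F - F_bar) / F_bar

# Intervals P_i between successive peaks of S(t), away from the window edges;
# the quasi-periodicity criterion is sigma(P) / P << 1.
peaks, _ = find_peaks(S)
inner = peaks[(t[peaks] > 60.0) & (t[peaks] < 540.0)]
P_i = np.diff(t[inner])
P, sigma_P = P_i.mean(), P_i.std()
print(f"P = {P:.1f} s, sigma(P)/P = {sigma_P / P:.2f}")
```

On this synthetic signal the recovered mean period is close to the injected 40 s and σ(P)/P is small, which is exactly the signature the text uses to call a pulse sequence quasi-periodic.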
Investigation of the HXR Images
The sensitivity of the Yohkoh HXT was moderate and the counting rates were low in this flare (see Figure 3); therefore it was necessary to apply rather long integration times in the reconstruction of HXR images. Figure 5 shows the 23-33 keV image recorded during the maximum of HXR emission. We see the high flaring loop above the eastern solar limb (its altitude was about 45 Mm). There was a triangular cusp structure, BPC, at the top and strong footpoint sources, F1 and F2, on the solar disc. The position of the footpoints indicates that the plane of the flaring loop was tilted from the plane of the image, i.e. that there is a significant geometrical foreshortening in the north-south dimension of the loop. We have seen in Papers I and III that the sources B and C were located at the places where the cusp structure was connected with the arcade channel. Figure 6 shows a sequence of 23-33 keV images. This long sequence allowed us to investigate the asymmetry in the electron precipitation and its changes in some detail. Between 21:06 and 21:08 UT (Figures 6a-c) there was a clear coupling between the top source P and the footpoint F1. During this time interval a gradual increase in the intensity of the P source was seen, which is certainly the result of the increase in density due to the chromospheric evaporation from the footpoint F1. At about 21:10 UT (Figure 6d), strong precipitation toward the footpoint F2 began. This caused chromospheric evaporation, an inflow of dense plasma into the acceleration volume, an increase in the number of accelerated electrons, and the generation of intense HXR pulses. This moment of time (≈21:10 UT) is analogous to 16:31 UT in the flare of 2 November 1991 (see Paper III), when the footpoint source F had appeared (see Figure 6 in that paper). The maximum of compression is different in different traps, i.e. their χ_min values are different.
The electrons inside the traps undergoing weaker compression (higher χ_min values) achieved lower energies (≈15 keV) and the efficiency of their precipitation was low (see Figure 7). The electrons within the traps undergoing strong compression (low χ_min values) achieved higher energies (≈25 keV) and efficiently precipitated toward the footpoints (Figure 6). This means that the ensemble of traps responsible for generating most of the ≈15 keV electrons was different from the ensemble providing the ≈25 keV electrons. Figure 7 also shows that most of the ≈15 keV electrons emitted their energy and were thermalized within the BPC cusp structure. The strong source B in these images indicates that the traps which dominated in the generation of ≈15 keV electrons had a good connection with the arcade channel at B. Characteristic features of the HXR impulsive phase are the asymmetry in the precipitation of accelerated electrons from the cusp structure toward the footpoints and changes in the asymmetry with time and with the energy of the electrons (see Papers I-III). These features are clearly seen in Figure 6 of the present paper. Subsequently the precipitation toward the two footpoints was again of similar magnitude, and in later images (after 21:16 UT, Figure 6h) the precipitation toward the footpoint F2 dominated.
These complicated variations in the asymmetry of precipitation of accelerated electrons can be adequately explained in terms of our model of oscillating magnetic traps.
i) The asymmetry in the precipitation for an HXR pulse may arise if the axis of symmetry of the magnetic traps (approximately, the line joining source P with the middle of the segment BC) is not perpendicular to BC. A small deviation of the axis of symmetry from perpendicularity to BC introduces a small difference in the maximum compression (χ_min) at the opposite ends of the traps. These small differences in the χ_min values lead to significant differences in the precipitation of electrons, since the efficiency of precipitation depends steeply on χ_min (see Section 3 in Paper III). Hence, a large asymmetry in precipitation may result from a moderate deviation of the axis of symmetry of the magnetic traps from perpendicularity to the line BC. An observation that the footpoint intensities satisfy F1 > F2 means that (χ_min)_1 < (χ_min)_2 in the cusp, F1 ≈ F2 means (χ_min)_1 = (χ_min)_2, and F1 < F2 means (χ_min)_1 > (χ_min)_2.
ii) Changes in the asymmetry of precipitation (F1/F2 ratio) mean that the direction of the axis of symmetry of the traps changes in time. Again, small changes in the direction can cause large changes in the asymmetry of precipitation, since small changes in χ min lead to large changes in precipitation.
X-ray images investigated in Papers I and III have shown two important features of the large arcade flares: i) The triangular ("cusp") BPC structure was magnetically connected with the arcade channel, and therefore the accelerated electrons were able to penetrate into the channel and heat it.
ii) The triangular cusp covered only a part of the length of the arcade channel, i.e. the channel was significantly longer than the extension of the cusp measured along the channel (see an example in Figure 8). This indicates that the energy which penetrated into the arcade channel was efficiently transferred along the channel by thermal conduction. Unfortunately, the arcade channel could not be seen during the impulsive phase of the flare of 2 March 1993, since no SXR images are available for this phase.
Investigation of the Flare Decay Phase
There were no SXR observations for the impulsive phase of the investigated flare of 2 March 1993. Some SXR images, recorded with the thin Al.1 filter, were available only for the late decay phase of the flare (Figure 9). We see a long arcade of SXR loops at an altitude of about 95 Mm. The response function of the Yohkoh SXT observations with the Al.1 filter depends weakly on temperature in the wide range T ≈ 2.5-20 MK (Tsuneta et al., 1991). Therefore all plasma having T > 2.5 MK contributed efficiently to the recorded emission. The distribution of the intensity in the SXR images displays the distribution of the SXR-emitting plasma. Figure 10 shows the comparison of an SXR arcade image with an impulsive-phase HXR image. The HXR image was recorded at 21:08 UT, when the arcade channel was at the level of the B and C sources (see Figure 5). Figure 10 shows that the BPC cusp structure covered only a small part of the arcade length (compare this figure with Figure 8).
In Figure 11 we show the time variation of the temperature, T , and emission measure, EM, derived from the standard GOES observations (Figure 2). During the quick increase of temperature (21:00-21:20 UT; impulsive phase) the cusp structure and the arcade channel had been filled with the plasma coming from the chromospheric evaporation.
Between 21:20 and 22:00 UT the temperature decreased, but the emission measure continued to increase. This means that the energy release decreased, but it was sufficiently high to maintain the chromospheric evaporation. The decay phase of the investigated flare started at about 22:00 UT.
In steady-state loops the following condition is fulfilled:

∫ E_H dV = ∫ E_R dV,    (2)

where E_H is the heating rate per unit volume, E_R is the radiative loss per unit volume, and the integrals are taken over the whole volume of a loop, i.e. the total heating is balanced by the total radiative loss from the loop. The "scaling law" of Rosner, Tucker, and Vaiana (1978) is an analytical expression of the energy balance [Equation (2)]. It can be written in the following form (see Bak-Stȩślicka and Jakimiec, 2005):

N ≈ 1.3 × 10⁶ T² / L,    (3)

where N is the mean electron number density in cm⁻³, T is the temperature at the top of the loop in K, and L is the semilength of the loop in cm. It is very important that coronal loops have an efficient mechanism which allows them to satisfy the energy-balance condition [Equation (2)], i.e. to achieve a steady state. Here we briefly describe this mechanism. If ∫E_H dV > ∫E_R dV, then the flux of energy which reaches the footpoints by thermal conduction is large and it generates chromospheric evaporation. This increases the density in the loop and increases ∫E_R dV, which allows the loop to achieve the energy balance [Equation (2)]. If, on the other hand, ∫E_H dV < ∫E_R dV, the flux of energy which reaches the footpoints is low, since most of the energy is emitted above the footpoints. Then the pressure at the footpoints is too low to balance the weight of the plasma contained in the loop and some of the plasma precipitates to the chromosphere (this is seen in the numerical simulations described in Jakimiec et al. (1992)). The density in the loop and ∫E_R dV decrease, and this allows the loop to achieve the energy balance [Equation (2)].
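As a numerical illustration, the steady-state density implied by the scaling law can be evaluated directly from T and L. The coefficient below is derived from the Rosner-Tucker-Vaiana form T ≈ 1.4×10³ (pL)^(1/3) with p = 2NkT; it is an approximate reconstruction, not necessarily the exact calibration used by the authors:

```python
# Steady-state ("scaling law") density of a coronal loop, cgs units.
# Coefficient derived from RTV: T = 1.4e3 * (p L)**(1/3) with p = 2 N k T;
# treat the numerical prefactor as approximate.
K_BOLTZMANN = 1.38e-16  # erg/K

def steady_state_density(T, L):
    """Mean electron density N [cm^-3] for apex temperature T [K]
    and loop semilength L [cm]."""
    return T**3 / (1.4e3**3 * 2.0 * K_BOLTZMANN * T * L)

# Example: T ~ 6 MK, L ~ 95 Mm (values used for the arcade loop in the text).
N = steady_state_density(6.0e6, 9.5e9)
print(f"N = {N:.2e} cm^-3")
```

For these inputs the sketch gives N of order 10⁹-10¹⁰ cm⁻³, the same order of magnitude as the N ≈ 2 × 10⁹ cm⁻³ obtained for the arcade loop later in the text; the difference in the prefactor reflects the approximate coefficient assumed here.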
This mechanism of self-regulation of coronal loops is quick in comparison with the slow evolution of a flare during the slow decay phase. For example, numerical simulations have shown that a loop of semilength L = 20 Mm adjusts to a change in heating rate in a time of only 5 min (Jakimiec et al., 1992).
In previous papers we investigated flare evolution in log T vs. log N or log T vs. log √EM diagnostic diagrams (see Jakimiec and Bak-Stȩślicka (2011) and references therein). We have found that during the slow decay phase flares evolve along the line of steady-state loops, i.e. the line described by Equation (3) with L = const. This indicates that during this evolution flares are close to the steady state [Equation (2)] with slowly decreasing heating, i.e. decreasing values of the integrals in Equation (2) (quasi-steady-state, or QSS, evolution of the loops). These results have been supported by numerical simulations of loops with slowly decreasing heating (Jakimiec et al., 1992). These results can be summarized as follows: i) During the slow decay phase (after 22:30 UT in Figure 11) the loops seen in Figure 9 slowly evolved along a sequence of steady states (QSS evolution).
ii) The loops were continuously heated to support this QSS evolution.
How was this continuous heating of the loops seen in Figure 9 maintained? Our proposed explanation is the following: i) Magnetic reconnection occurred at the tops of the loops and between the loops and the arcade channel. This provided continuous heating of the loops. ii) New magnetic loops which were generated by the reconnection comprised only a small fraction of the volume of the loops seen in Figure 9, and they quickly achieved the energy balance [Equation (2)] due to the self-regulation described above. Therefore the reconnecting loops did not disturb much the slow QSS evolution of the loops. This interpretation is supported by the close correlation between the temperature and emission measure during the slow decay phase, d log T / d log √EM ≈ 0.5, in agreement with Equation (2) and the numerical simulations described in Jakimiec et al. (1992).
The two main processes of thermal energy loss from a flare kernel are thermal conduction and radiative losses. The rates of these losses, calculated per unit volume, are (see Bak-Stȩślicka and Jakimiec, 2005):

E_C ≈ κ₀ T^{7/2} / (a L)    (4)

and

E_R = N² Φ(T),    (5)

where a is the radius of the X-ray kernel, L is the length of the flaring-loop "legs", κ₀ is the coefficient of thermal conduction, and Φ(T) is the radiative loss function. When the observed temperature changes are slow, i.e. the values of dT/dt are small, the loss of energy from the kernel is compensated for by the heating E_H:

E_H ≈ E_C + E_R.    (6)

We have applied Equations (3)-(6) to the beginning of the decay phase (22:30 UT in Table 2) and also to the temperature maximum (21:18 UT), since the change in the heating was slow then as well (small dT/dt means small dE_H/dt). In Figure 5 we have measured the radius of the cusp BPC structure, a ≈ 7.9 Mm, and the length of the loop "legs", L ≈ 26 Mm. Using Equations (3)-(6) we have calculated the values of the physical parameters, which are given in Table 2. Table 2 shows that the energy release within the cusp (magnetic reconnection at the edges of the cusp, acceleration of the electrons, and their thermalization) decreased steeply after the temperature maximum. Figure 11a shows that after 22:00 UT the decrease in the energy release was much slower, which allowed the flare to develop the long QSS decay phase.
We also see in Table 2 that E_C > E_R, i.e. the conductive loss of energy was larger than the radiative loss (see also Kołomański, 2007). Hence, we can use the following simple approximation in further estimates: E_H ≈ E_C ∼ T^3.5, which gives T ∼ (E_H)^0.286. The small value of the power index in the last formula implies that significant changes in E_H induce only weak changes in T (see Table 2). Figure 9 indicates that the energy release during the decay phase occurred in the arcade of SXR loops. It is most probable that the weak energy release which maintained the temperature at T > 6 MK was mostly due to reconnection between the arcade loops and the arcade channel. The bright arcade loops in Figure 9 are the places where the reconnection was most efficient.
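The weak dependence T ∼ (E_H)^0.286 can be made concrete: even a hundredfold drop in heating lowers the temperature by less than a factor of four. A quick check:

```python
# T ~ E_H**0.286 (since E_H ≈ E_C ~ T**3.5 and 1/3.5 ≈ 0.286):
# even large drops in the heating rate change the temperature only weakly.
ratios = {}
for drop in (2.0, 10.0, 100.0):
    ratios[drop] = drop ** 0.286          # factor by which T decreases
    print(f"E_H reduced {drop:>5.0f}x -> T reduced {ratios[drop]:.2f}x")
```

A tenfold reduction in E_H lowers T by only a factor of about 1.9, consistent with the slow temperature decline in Table 2 despite the steep drop in energy release.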
We have applied the above Equations (3)-(6) to one of the arcade loops. We have measured the radius in the southern loop-top kernel, a ≈ 11 Mm, and the length of the "leg" connecting the kernel with the loop footpoint, L ≈ 95 Mm. We have obtained E H ≈ 0.023 erg cm −3 s −1 and N ≈ 2.0 × 10 9 cm −3 . Hence, to maintain the long-duration decay phase, weak energy release in the arcade loops was sufficient.
Important information is contained in the time variation of the emission measure, EM(t) (Figure 11b). This figure shows that during the impulsive phase a large amount of hot plasma had been accumulated within the flaring system (the flaring loop and arcade channel), and during the decay phase this amount of plasma slowly and smoothly decreased (the small peak in EM(t) at about 00:25 UT is of instrumental origin; the peaks after 03:00 UT are due to other flares). The relationships between the physical parameters during the decay phase were the following:

T ∼ E_H^0.286, N ∼ T², EM ∼ N².    (8)
(Here changes in the loop semilength, L, and in the volume, V, of the SXR-emitting plasma have been assumed to be of minor importance.) Combination of the relationships [Equation (8)] gives:

EM ∼ T⁴ ∼ E_H^1.14.    (9)

This simple relationship stresses the fact that the slow decrease in EM during the decay phase is due to the slow decrease in the heating E_H. In other words, the observed slow and smooth decrease of the emission measure, EM(t), indicates that the thermal energy release, E_H(t), decreased slowly and smoothly during the long decay phase. It is most probable that this weak energy release was mostly due to the reconnection between the magnetic arcade loops and the arcade channel.
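The exponent linking EM to E_H follows from chaining the relations T ∼ E_H^0.286, N ∼ T², and EM ∼ N² quoted above; a one-line check:

```python
# Decay-phase scaling chain: T ~ E_H**0.286 (from E_H ~ T**3.5),
# N ~ T**2, EM ~ N**2  =>  EM ~ E_H**(0.286 * 2 * 2).
alpha_T = 0.286
alpha_EM = alpha_T * 2 * 2
print(f"EM ~ E_H**{alpha_EM:.2f}")   # EM nearly proportional to the heating rate
```

The near-unit exponent means the emission measure essentially tracks the heating rate, which is why a slow, smooth EM(t) decline implies a slow, smooth decline in E_H(t).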
Discussion
In Paper III we have found clear observational evidence that the strong HXR pulses at the HXR maximum (see Figure 1a) were the result of inflow of dense plasma (coming from the chromospheric evaporation) into the acceleration volume inside the cusp structure. For the investigated flare of 2 March 1993 we were not able to monitor the inflow of plasma into the cusp, since we had no SXR imaging observations for the flare impulsive phase. We can, however, obtain simple estimates for the increase in density inside the cusp using only the HXR light curves.
In Table 1 we see that the time interval, P_i, between the pulses increased during the HXR maximum (P₂/P₁ = 1.42, where P₁ is the period before 21:10 UT and P₂ is the period during the HXR maximum). We assume that this increase in the period is the result of an increase in density, and we consider two extreme cases: a) We assume that B²/8π ≫ p inside the traps, where B is the magnetic field strength and p is the pressure. Then the magnetic field does not change significantly during the increase in pressure in the traps and we have

P₂/P₁ = v_{A1}/v_{A2} = (N₂/N₁)^{1/2},    (10)

or

N₂/N₁ = (P₂/P₁)² ≈ 2.0,    (11)

where v_A = B/(4πρ)^{1/2} is the Alfvén speed and the mass density ρ is proportional to N.
b) We assume that B²/8π ≈ p. Then the increase in pressure causes broadening of the traps and the magnetic field in the traps decreases, which partly offsets the effect of the density increase on the Alfvén speed. In this case the period ratio scales approximately linearly with the density ratio, i.e.

N₂/N₁ ≈ P₂/P₁ ≈ 1.4.
On the other hand, we can estimate the ratio of the densities directly from the ratio, J₂/J₁, of the HXR fluxes (J₁ is the flux between 21:05 and 21:08 UT and J₂ is the flux at the HXR maximum). For the 23-33 keV flux the ratio was J₂/J₁ ≈ 2.0 (see Figure 3). During these time intervals the emission came mainly from the flare footpoints (see Figure 6); hence it is proportional to the number of precipitating electrons per second which, in turn, is proportional to the electron number density, N, within the cusp. Hence

N₂/N₁ ≈ J₂/J₁ ≈ 2.0.

This last estimate is independent of the previous two. Therefore, we consider these estimates to be a confirmation that the density quickly increased by a factor between 1.4 and 2.0 (the mean value is 1.8 ± 0.2) and that the longer period, P_i, at the HXR maximum is the result of this increase in density.
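The three independent density estimates discussed above reduce to simple arithmetic on the measured ratios:

```python
import statistics

P_ratio = 1.42   # period ratio P2/P1 from Table 1
J_ratio = 2.0    # 23-33 keV flux ratio J2/J1 from Figure 3

# a) B**2/8pi >> p: P ~ 1/v_A ~ sqrt(N), so N2/N1 = (P2/P1)**2
n_ratio_a = P_ratio**2
# b) B**2/8pi ~ p (traps broaden, B drops): N2/N1 ~ P2/P1
n_ratio_b = P_ratio
# c) footpoint HXR flux ~ number of precipitating electrons ~ N
n_ratio_c = J_ratio

estimates = [n_ratio_a, n_ratio_b, n_ratio_c]
print(f"estimates: {[round(x, 1) for x in estimates]}, "
      f"mean = {statistics.mean(estimates):.1f}")
```

The three routes give factors of about 2.0, 1.4, and 2.0, with a mean near 1.8, matching the 1.8 ± 0.2 density increase quoted in the text.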
Summary
The sequence of HXR images for the impulsive phase of the 2 March 1993 flare has allowed us to investigate the asymmetry in the precipitation of accelerated electrons from the acceleration volume within the cusp toward the flare footpoints. According to our model of acceleration of the electrons in oscillating magnetic traps, the precipitation was most efficient during the maximum compression of the traps (see Papers I and III). The asymmetry of precipitation shows that the maximum of compression was different at the opposite ends of the traps, i.e. the axis of symmetry of the traps was slightly inclined toward one of the ends of the traps (see Paper III). The changes in the asymmetry from one HXR pulse to another indicate that the inclination of the axis of symmetry of the traps changed. SXR images, which were available for the late decay phase, show a long arcade of SXR loops (Figure 9). In Figure 10 a contrast between the "slim" shape of the flaring loop and the long arcade is seen. This contrast is enhanced by the fact that there were no SXR images for the flare impulsive phase and therefore the arcade channel was not seen during this phase. In other flares such SXR images show a connection between the cusp and the arcade channel and the heating of the channel by the cusp (see Figure 8). Cases similar to that in Figure 8 will be shown in our next paper. Important information about the evolution of a flare during the slow decay phase is contained in the time variation of the temperature, T(t), and emission measure, EM(t). This information is the following: i) weak heating occurs during the slow decay phase and it slowly decreases; ii) the decrease in the heating determines the slow and smooth decrease in EM; iii) the coupling between the heating and the amount of hot plasma makes the flare evolve along a sequence of quasi-steady states during the slow decay phase (QSS evolution).
Does the Digital Economy Increase Green TFP in Cities?
COVID-19 accelerated the growth of the digital economy and digital transformation across the globe. Meanwhile, it also created a higher demand for productivity in the real economy. Hence, the correlation between the digital economy and green productivity is worth studying as COVID-19 prevention becomes the norm. The digital economy overcomes the limitations imposed by traditional factors of production on economic growth and empowers innovative R&D and resource allocation in all aspects. This study delved into the digital economy by focusing on its green value at different levels of development. The study gathered the green-productivity indices and the principal components of the digital economy for each prefecture-level city in China from 2011 to 2019 and meticulously portrayed their trends in spatial and temporal figures. Meanwhile, regression models were used to verify the mechanism through which digital-economy development influences the changes in green productivity. The results showed that: (1) a higher level of digital economy helps to increase urban green total-factor productivity (GTFP) and that the conclusions of this paper still held after potential endogeneity problems were solved through the instrumental-variables approach; (2) the digital economy will drive an increase in urban GTFP by upgrading firms’ production technologies and that digital-economy development encourages green patent applications from firms; and (3) as the digital economy develops, it will also drive urban GTFP increases by removing polluting enterprises from the market and that the higher the level of digital-economy development, the greater the number and probability of polluting enterprises exiting the market.
In view of this study’s results, the government should increase the importance of the digital economy, strengthen the role of the digital economy in promoting urban green development, and provide more guidance on regional green development with the help of the digital economy.
Introduction
While China's economic development records remarkable achievements, it also faces pressure from both serious resource depletion and environmental pollution. With the strong emphasis on green and sustainable development, the diminishing marginal returns of traditional factors of production and rapidly growing industrial added value are entering into increasing tension with the continuously increasing emission and discharge of pollutants [1]. Therefore, it is necessary to find a development path that balances productivity and green sustainability. Traditional total-factor productivity (TFP) is measured based on the influence of capital and labor input on output without taking into account resource input and the impact on the environment [2]. Therefore, traditional total-factor productivity cannot accurately reflect changes in socioeconomic welfare, which are crucial for industrial policymaking, and it is necessary to conduct in-depth research on the main factors driving green total-factor productivity.
The digital economy is a new economic form that relies on information technology to drive economic growth, and the rapid development of information technology is an important element of the digital economy. Digital technology allows business entities to cut down on costs and increase productivity [3][4][5]. Some scholars believe that the digital economy itself is a very special economic form in which transactions for goods and services are completed virtually. Thus, the digital economy develops in line with the development of information and communication technology (ICT) and relies on the rapid development of information technology to penetrate into all aspects of life, change their respective modes of operation, and improve efficiency [6][7][8]. The core elements of the digital economy include ICT and digital-technology development. Digital technology combined with the manufacturing and service sectors can transform the traditional production process by upgrading manufacturing and servicing processes. Digital technology plus manufacturing is the main trend in the future development of the digital economy, and the development of the digital economy will spawn new business models such as the platform economy and the sharing economy with certain green features. The innovative features of the digital economy will accelerate the formation of the platform economy, and digital platforms will facilitate the search for product and service information, reduce the cost of product and service matching, and increase transaction speeds. The major difference between the digital economy and the traditional economy lies in how their information flows are presented: the former occurs mainly through digital flows, while the latter is based on physical methods. The digital economy can be divided into three scopes based on its fields of application: core, narrow scope, and wide scope, which cover various industries in society.
A new production factor, data, overcomes the limitations associated with traditional industry, such as high pollution, high inputs, and low outputs, while optimizing the production process. At the same time, it is believed that the integration of the digital economy and real manufacturing is the future development trend and that the digital economy will give rise to a series of new business models such as live-streaming-influencer businesses and bicycle sharing. These business models have certain green features and can increase urban GTFP to a certain extent. The innovative features of the digital economy will accelerate the formation of the platform economy. Digital platforms enable quick searches for and the matching of product and service information, as well as improvements in transaction efficiency. In addition, they have energy-saving and emission-reduction characteristics, which can increase the development of GTFP [9,10].
The study of the relationship between the digital economy and total-factor productivity is currently a major concern of national policymakers. Some studies have shown that the development of the digital economy is related to improvements in total-factor productivity. From a macro perspective, digital-economy development can improve regional total-factor productivity, as shown by a quasi-experiment on national big-data comprehensive pilot zones, and digital technology will lead to improvements in total-factor productivity through the spatial spillover effect [11,12]. From a micro perspective, digital transformation can improve the total-factor productivity of enterprises because digital technology can be used to search for product and service information more efficiently, reduce product and service matching costs, and help transactions to be concluded as soon as possible, thereby improving the production efficiency of enterprises [13,14].
However, these studies have not yet delved into the impact of the digital economy on green total-factor productivity. Since the digital economy can play an important and positive role in the green low-carbon and sharing economy, how the digital economy encourages green development in the context of resource depletion and environmental pollution is a key issue worthy of research. At present, studies related to research on the impact of the digital economy on green development are relatively rare, which underlines the necessity of studying the effect of digital technology on green development. The research on the digital economy focuses on the transformation and upgrading of manufacturing by the digital economy, and there has been no in-depth discussion on the innovative characteristics and green value of such a new production factor as the digital economy. In addition, the digital economy is an inclusive concept with various types, and previous studies were mostly centered on R&D and product sales; little attention was paid to the study of the effect of the digital economy on economic efficiency, and less research was conducted related to the study of the effect of the digital economy on the increase in productivity in the real economy. More importantly, the innovative characteristics of the digital economy are by no means simple network behaviors, but their combination with traditional factors of production generates new factors and creates strong innovation drivers to achieve value creation [15][16][17]. There is a lack of academic studies on the relevant influencing factors and mechanisms. In addition, although most studies affirmed the positive impact of the digital economy on total-factor productivity, they failed to fully reflect the efficiency advantages in terms of improving ecology, the environment, and resource conservation, and generally ignored its relationship with the green development of total-factor productivity. 
In the face of the multiple dilemmas posed by increasing resource constraints and environmental pressures, enhancing green total-factor productivity will be the key to driving green and sustainable development in the future. The fundamental issue that needs to be addressed for green TFP growth is the shift from factor-driven to technology-and innovation-driven growth in order to maximize output in an ecologically and environmentally friendly manner.
Regarding the connection between the digital economy and green productivity, Lyu et al. (2022) and Liu et al. (2022), for example, argued that the digital economy can significantly improve China's GTFP. The higher a city's GTFP, the greater the increase in urban GTFP as a result of the digital economy. Moreover, it was found that the digital economy improved urban GTFP by upgrading industrial infrastructure and alleviating factor-market distortion [18,19]. Indeed, the existing literature offers a beneficial exploration of the connection between the development of digital economy and green productivity, although it mainly discusses the impact of the digital economy on GTFP at the macro level. Enterprises are the main bodies of digital transformation; green productivity requires enterprises to apply green technology, but it also depends on the elimination of polluting enterprises from the market. Therefore, previous research needs to be supplemented by a greater focus on the micro level.
Channel of Influence
As a key production mode based on the use of digital knowledge and information, the digital economy overcomes the limitations imposed by traditional production factors on economic growth, and it empowers the process of innovation and resource allocation in all aspects. As a low-carbon green technology and production mode, the digital economy not only contributes to the improvement of ecology and the environment in economic development, but also makes it possible to increase green total-factor productivity [20][21][22][23]. Moreover, the high transmission efficiency of the digital economy makes it easier to share information between regions, while the spatial externalities created by its technology spillover draw the economic activities of different regions closer together, which may have direct or indirect effects on the green total-factor productivity of other regions.
With the vigorous development of the digital economy, modes of production and lifestyles are undergoing rapid changes. Industrial digitization and digital industrialization are increasing, and people's lives are becoming increasingly networked and intelligent. The continuous emergence of new technologies and models increases the updating and iteration of information-industry knowledge and technology, while the life cycle of products or services is gradually shortened. Production and business innovation require information enterprises to obtain new market and technical information in time. Convenient and rapid access to technology spillovers is of great value to enterprises. Digital technology effectively reduces the communication costs between enterprises and between regions, unblocks the channels of knowledge spillovers, builds a tight knowledge-flow network, and enables enterprises to leverage technology spillovers more effectively. Enterprises in a given region can freely share innovative resources through this network; digital technology plays an important role in this network by increasing the spread of green innovative technology and other resources. It can not only facilitate the regional exchange of innovative resources between enterprises, but also increase the inter-regional flow of innovation factors and resources and optimize their allocation in the whole economic system.
Industrial digital transformation has also greatly accelerated the informatization process of enterprises. Through the application of new technologies such as big data, cloud computing, blockchain, and the Internet of Things, traditional manufacturing enterprises can become intelligent manufacturers, thereby improving the technological innovation ability of enterprises and thus improving the total-factor productivity of enterprises. From the perspective of entrepreneurship, the capital-entry threshold of the information-service industry itself is relatively low. As high-speed-rail cities can maintain closer communication and technical contact with central cities and developed areas, professionals with technical skills are more likely to enter high-speed-rail cities to start businesses in the information-service industry. In particular, this applies to the professional and technical personnel in central cities or developed areas who are interested in starting businesses in areas where the information industry is still underdeveloped.
Competition is the essential feature of the development of the digital economy. Network effects strengthen both monopolies and competition. Evans and Schmalensee (2007) pointed out that through indirect network effects, incumbents, with their first-mover advantage, generate positive-feedback effects as users flock to a few platforms, thereby forming an oligopoly trend [24]. However, the existence of the positive-feedback effect does not lead to monopolies in most multilateral markets. A closer analysis of the search-engine market, for example, shows that although the current market is structured as an oligopoly, competition among its main players remains diversified.
Therefore, this paper proposes two ways in which the digital economy could influence urban GTFP: First, digital technology encourages resource exchanges between innovative entities. It breaks the barriers of time and space and broadens the channels and scope of information dissemination. A large amount of information can be rapidly stored and shared by innovative entities through cooperation, which increases efficiency and lowers the cost of sharing and acquiring knowledge. Companies in a region can share innovative resources at will through the network. Digital technology contributes to this by facilitating the transmission of such resources as green innovative technologies. Beyond its value in encouraging the regional exchange of innovative factors and resources between companies, it also accelerates their flow across regions and optimizes their allocation within the whole economy.
Second, the development of the digital economy can effectively reduce search, transaction, matching, and replication costs by alleviating information asymmetry, thus lowering transaction barriers, breaking market boundaries, expanding market scope, facilitating the flow of factors in a larger space, and optimizing factor allocation. The development of the digital economy rules out traditional pollution- and energy-intensive industries and improves GTFP. This paper offers the following academic contributions. First, the existing literature focuses mostly on the impact of the digital economy on the traditional industrial-sales model and rarely on production. This paper examines the impact of the digital economy on total-factor productivity in terms of its green-development characteristics, which is a much-needed addition to the existing literature. Second, in order to more accurately measure total-factor productivity in China, the traditional total-factor-productivity indicators need to be transformed and upgraded to include factors such as environmental pollution and resource consumption. Therefore, this paper used the DEA-SBM model to combine traditional total-factor productivity with relevant non-desired outputs and used the GML productivity index to measure green total-factor productivity by region.
Econometric Model
According to the theoretical mechanism described in the last section, the following regression model was established to analyze the impact of the digital economy on green total-factor productivity:

GTFP_it = α + β Digit_it + γ Z_it + µ_i + v_t + ε_it,

where i and t represent cities and years, respectively; GTFP_it denotes green total-factor productivity; Digit_it denotes the level of digital-economy development of City i in Year t; Z_it denotes the control variables used in this paper, such as population, share of secondary industry, share of tertiary industry, share of fixed-asset investment, share of real-estate investment, share of local fiscal expenditure, share of local fiscal revenue, and share of foreign direct investment; µ_i and v_t denote the city and time fixed effects, respectively; and ε_it denotes the random error term. To account for possible omitted variables and reverse causality, the analysis below addresses potential endogeneity through an instrumental-variables approach.
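To make the specification above concrete, the sketch below estimates the slope β on a simulated balanced panel using the two-way within transformation (demeaning by city and by year). All names and the simulated data are illustrative only; the paper's actual estimation uses the real city panel and Stata.

```python
import numpy as np

def within_transform(v, city, year):
    """Balanced-panel two-way within transformation:
    v_it - mean_i(v) - mean_t(v) + grand mean."""
    v = v.astype(float)
    out = v.copy()
    for c in np.unique(city):
        out[city == c] -= v[city == c].mean()
    for t in np.unique(year):
        out[year == t] -= v[year == t].mean()
    return out + v.mean()

def twoway_fe_slope(y, x, city, year):
    """OLS slope of y on x after sweeping out city and year fixed effects."""
    xt = within_transform(x, city, year)
    yt = within_transform(y, city, year)
    return float(xt @ yt / (xt @ xt))

# Toy balanced panel: 10 cities x 6 years, true beta = 0.5
rng = np.random.default_rng(0)
n_c, n_t = 10, 6
city = np.repeat(np.arange(n_c), n_t)
year = np.tile(np.arange(n_t), n_c)
x = rng.normal(size=n_c * n_t)
y = (0.5 * x + rng.normal(size=n_c)[city] + rng.normal(size=n_t)[year]
     + 0.01 * rng.normal(size=n_c * n_t))
beta_hat = twoway_fe_slope(y, x, city, year)
```

On a balanced panel, this demeaning removes the city and year effects exactly, so the recovered slope is close to the true 0.5.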
Definition and Measurement of the Digital Economy
Currently, there is no international standard for the selection of indicators and measurement methods for the digital economy, and there are no unified measurement indexes of the digital economy as a normative guide. For example, during the 5th IMF Statistical Forum, which was themed "Measuring the Digital Economy", it was mentioned that there was no statistical way to measure the marginal contribution of digital economy to manufacturing products and services. The Digital Economy Competitiveness Index released by the Shanghai Academy of Social Sciences analyzes the development of the digital economy in the world from four aspects through the construction of an international competitiveness model: infrastructure development, industry volume, innovation capacity, and governance evaluation related to digital industries. The Digital Economy Board of Advisors (DEBA) of the U.S. Department of Commerce, the Organization for Economic Cooperation and Development (OECD), the Bureau of Economic Analysis (BEA) of the U.S. Department of Commerce, the European Union (EU), and the China Academy of Information and Communications Technology (CAIC) have all conducted in-depth studies on the measurement methods of the digital economy, but no method has been universally agreed upon; the digital-economy-related measurement method proposed by each international organization has certain limitations on its applicability. This paper draws on the method of Huang Huiqun et al., (2019), which uses indicators concerning the Internet-access rate, related employee profiles, related output profiles, and cell-phone penetration rate; to be specific, these respectively entail the number of users that have access to broadband Internet per 100 people, the ratio of employees in the computer service and software industry to the total employee population in urban areas, the total amount of telecommunication services per capita, and the number of cell-phone users per 100 people [25]. 
The original data for these indicators can be obtained from the China City Statistical Yearbook. For the development of digital finance, the China Digital Inclusive Finance Index was used; it was jointly compiled by the Peking University Digital Inclusive Financial Index (pku.edu.cn, accessed on 1 October 2022.) and Ant Financial Services Group (Guo Feng et al. 2020) [26]. The comprehensive digital-economy-development index was then obtained by standardizing the data for the five indicators above and then breaking them down through a principal component analysis.
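As a sketch of the index construction described above, the code below standardizes a handful of illustrative indicator columns and weights them by the first principal component obtained from an SVD. The five columns mimic, but are not, the broadband, employment, telecom-output, cell-phone, and digital-finance indicators.

```python
import numpy as np

def composite_index(X):
    """Standardize indicator columns and weight them by the first
    principal component (via SVD of the standardized matrix)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    w = Vt[0]
    if w.sum() < 0:  # fix the sign so higher indicators raise the index
        w = -w
    return Z @ w

# Five illustrative city-level indicators (rows = cities)
X = np.array([[10., 0.02, 300., 80., 120.],
              [25., 0.05, 800., 95., 210.],
              [18., 0.03, 500., 88., 160.],
              [30., 0.06, 900., 99., 250.]])
idx = composite_index(X)
```

Because the toy indicators are all positively correlated, the first-component loadings share one sign, so a city that dominates another on every indicator also gets the higher composite score.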
Concept and Measurement of Green Total-Factor Productivity
Green total-factor productivity (GTFP) is measured in a way that incorporates resource and environmental factors into the framework of productivity analysis, which is in line with the concept of green development in the new era. Chung et al. (1997) were the first to introduce pollution emissions into the total-factor-productivity measurement framework, proposing the directional distance function (DDF) on which Malmquist-Luenberger (ML) productivity is based; this index can measure the pollutant output in the production process and incorporate it into the total-factor-productivity index system [27]. Tone (2001) made a related improvement by establishing a slacks-based measure of efficiency based on a directional distance function (SBM-DDF), which effectively reduced measurement bias. Yuan et al. (2015), on the other hand, proposed another measurement of green total-factor productivity with a dynamic time-series effect based on the SBM-DDF function [28,29]. In order to explore the influencing factors of green total-factor productivity in depth, the existing literature focuses on the role of environmental regulation, FDI, technological progress, and carbon emissions in green total-factor productivity. Earlier measurements have three shortcomings: first, there are difficulties in incorporating resource-consumption and environmental-pollution variables into a specific production function; second, they cannot reflect the directionality of non-desired and desired outputs; third, even though the second shortcoming is solved by the directional distance function proposed by Chung, its strict requirements for radiality and angularity further restrict its application range [1,[30][31][32].
This paper measured urban total-factor productivity growth through a global data envelopment analysis (DEA) that integrated the super-efficient SBM model while considering the non-expected output and the Malmquist productivity index. The global DEA used the input-output data of all decision makers over the whole period to construct the optimal production frontier and measured all decision makers in different periods within the global optimal production frontier, which effectively solved the problems of infeasible solutions and incomparability across periods.
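The paper's measure is a global super-efficient SBM model with undesirable outputs combined with a GML index, which is too long to reproduce here. As a sketch of the basic building block only, the code below solves the plain input-oriented CCR envelopment linear program (no slacks-based measure, no bad outputs) with `scipy.optimize.linprog`; the two-DMU data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    x0, y0 = X[:, j0], Y[:, j0]
    c = np.r_[1.0, np.zeros(n)]          # decision vector: [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -x0                    # X lam - theta x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                    # -Y lam <= -y0
    b_ub = np.r_[np.zeros(m), -y0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Two DMUs, one input, one output: B uses twice the input for the same output
X = np.array([[1.0, 2.0]])
Y = np.array([[1.0, 1.0]])
theta_a = ccr_efficiency(X, Y, 0)
theta_b = ccr_efficiency(X, Y, 1)
```

DMU A sits on the frontier (score 1), while DMU B could radially shrink its input by half, so its score is 0.5; the SBM and GML refinements used in the paper build on exactly this kind of LP.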
Inefficiency obtained from the traditional DEA model is subject to the influences of the external environment, random interference, and inefficient management, which the traditional DEA cannot disentangle. Therefore, a second model similar to the SFA model was established based on the traditional DEA model:

s_ik = f_i(z_k; β_i) + v_ik + u_ik,  i = 1, ..., m;  k = 1, ..., n,

where s_ik is the slack variable of input i in the k-th decision-making unit, z_k is the external environment, and β_i is the index to be estimated (generally expressed as f_i(z_k; β_i) = z_k β_i).
The error of this model is the mixed error term (v_ik + u_ik), which satisfies v_ik ~ N(0, σ²_vi) and u_ik ~ N⁺(0, σ²_ui), where v_ik and u_ik are independent of each other and of z_k.
The ratio γ = σ²_ui / (σ²_vi + σ²_ui) yields the proportion of technical-inefficiency variance in the total variance. When γ is close to 1, the management factor plays the dominant role; when γ is close to 0, the random error plays the dominant role. Next, the SFA regression results were used to adjust the inputs so as to place every DMU in the same external environment, thus removing the influence of environmental and random factors:

x̂_ik = x_ik + [max_k(z_k β̂_i) − z_k β̂_i] + [max_k(v̂_ik) − v̂_ik],

where x_ik denotes the original input, x̂_ik denotes the adjusted value, β̂_i is the estimated index of the environmental variable, and v̂_ik is the estimate of the random interference. A third-stage DEA analysis with the adjusted inputs and the original outputs yields efficiency scores free of the impact of environmental factors and random interference.
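The three-stage input adjustment described above can be sketched in a few lines. Here `env_effect` stands for the fitted environmental term z_k β̂_i and `v_hat` for the separated random shocks; all numbers are illustrative.

```python
import numpy as np

def adjust_inputs(x, env_effect, v_hat):
    """Three-stage DEA input adjustment: put every DMU in the worst
    observed environment and under the worst observed luck:
    x_adj = x + (max(env) - env) + (max(v) - v)."""
    return x + (env_effect.max() - env_effect) + (v_hat.max() - v_hat)

x = np.array([10.0, 12.0, 9.0])      # original inputs of three DMUs
env = np.array([0.5, 0.2, 0.8])      # fitted environmental effects (illustrative)
v = np.array([0.1, -0.1, 0.0])       # estimated random shocks (illustrative)
x_adj = adjust_inputs(x, env, v)
```

DMUs that enjoyed a favorable environment or good luck receive the largest upward input adjustment, so the third-stage DEA compares all DMUs on equal footing.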
Enterprise Green Innovation
In this study, the green innovation of enterprises refers to innovations in green energy, green production, and green products. Green energy refers to the technical innovation of using renewable energy sources such as solar energy and new materials; green production refers to the technical innovation of improving design and production methods, adopting new processes and equipment, improving comprehensive utilization efficiency, and achieving energy savings and emission reductions; and green products refer to technical innovations that do not damage, or that reduce damage to, ecological environments during or after the use of the products [33,34]. Therefore, we define green innovation as a technological innovation that improves comprehensive utilization efficiency and achieves the purposes of saving energy and reducing emissions by enhancing the process, improving the design, and using alternative renewable energy. The green innovation of enterprises includes both green-innovation input and green-innovation output. However, since it is difficult to separate the green-innovation input from enterprises' R&D input, in this paper the number of green patent applications was used to measure each enterprise's green innovation. The data on corporate green innovation were collected by the authors from the State Intellectual Property Office website; these included the green patent applications of the main industrial listed companies, wholly owned subsidiaries, holding subsidiaries, and joint ventures. The data on environmental taxes were derived from the notes to the financial statements in the annual reports of enterprises. The control variables were derived from the China Stock Market and Accounting Research Database (CSMARD). Furthermore, in order to prevent the effect of outliers, all continuous variables were winsorized at the 1st and 99th percentiles.
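Winsorizing at the 1st and 99th percentiles, as applied to the continuous variables here, amounts to clipping each series at those two quantiles. A minimal numpy sketch on simulated data:

```python
import numpy as np

def winsorize_1pct(a):
    """Clip a continuous variable at its 1st and 99th percentiles."""
    lo, hi = np.percentile(a, [1, 99])
    return np.clip(a, lo, hi)

rng = np.random.default_rng(1)
a = rng.normal(size=1000)
a[0], a[1] = 50.0, -50.0   # two planted extreme outliers
w = winsorize_1pct(a)
```

The two planted outliers are pulled back to the 1st/99th-percentile bounds, while the bulk of the distribution is left untouched.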
Exit of Companies from Markets
This paper identified polluting enterprises via pollution-emission data. During the study, enterprises were cross-compared with the database of Chinese industrial enterprises to identify their existence and status [35]. To be exact, the data on Chinese industrial enterprises and pollution emissions from 1998-2014 were selected for cross-comparison. First, we followed Brandt (2012) to process the industrial-enterprise database and the pollution-emission database; second, we matched firms with the pollution-emission database by enterprise name and year; third, we matched them by unified social credit code and year; we then merged the matches from the second and third steps and removed duplicate records; finally, we retained the enterprises that satisfied the matching criteria in either the second or the third step. Covering about 250 variables over the 16-year period, about 700,000 matched observations were obtained with an average matching rate of about 17% [36].
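The two-pass matching described above (by name plus year, then by unified social credit code plus year, followed by de-duplication) can be sketched with pandas on toy records; every firm name and code below is made up.

```python
import pandas as pd

firms = pd.DataFrame({
    "name": ["A Co", "B Co", "C Co"],
    "credit_code": ["911", "912", "913"],
    "year": [2010, 2010, 2010],
    "output": [100, 200, 300],
})
emissions = pd.DataFrame({
    "name": ["A Co", "X Co", "C Co"],
    "credit_code": ["911", "912", "999"],
    "year": [2010, 2010, 2010],
    "so2": [1.0, 2.0, 3.0],
})

# Pass 1: match on enterprise name + year
m_name = firms.merge(emissions.drop(columns="credit_code"),
                     on=["name", "year"])
# Pass 2: match on unified social credit code + year
m_code = firms.merge(emissions.drop(columns="name"),
                     on=["credit_code", "year"])
# Union of both passes, dropping duplicate firm-year records
matched = (pd.concat([m_name, m_code])
           .drop_duplicates(subset=["credit_code", "year"]))
```

In this toy example "B Co" is only recoverable through the credit-code pass and "C Co" only through the name pass, which is why the union of the two passes raises the matching rate.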
Other Variable Indicators
The control variables included the area of the administrative district, population, proportion of secondary industry, proportion of tertiary industry, proportion of investment, proportion of real-estate investment, proportion of foreign direct investment, distance to the nearest port, and number of patents. The data were derived from the statistical yearbook of the corresponding year for each city; in order to ensure the comparability of data between different years, the data were deflated according to the CPI of the current year. In order to prevent the impact of outliers, all continuous variables were winsorized at the 1st and 99th percentiles. Table 1 shows the results of descriptive statistics.
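Deflating the nominal yearbook series by the CPI to a common base year, as described above, looks like this in pandas; the numbers are illustrative.

```python
import pandas as pd

df = pd.DataFrame({
    "year": [2011, 2012, 2013],
    "gdp_nominal": [100.0, 110.0, 125.0],
    "cpi": [100.0, 105.0, 110.0],   # CPI index, base year 2011 = 100
})
# Convert nominal values to constant 2011 prices
base = df.loc[df["year"] == 2011, "cpi"].iloc[0]
df["gdp_real"] = df["gdp_nominal"] * base / df["cpi"]
```

After deflation, the 2012 figure of 110.0 becomes roughly 104.76 in 2011 prices, so real growth across years is directly comparable.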
Changes in Green Productivity through Time and in Different Regions
As Table 2 shows, the change in green productivity over the sample period was not significant, but the overall trend was that the national average green productivity increased after 2014 compared to the previous years. This change in green productivity may have been related to the faster development of the digital economy in the country after 2014. In addition, the changes in the dispersion of green productivity across years were analyzed using inter-regional standard-deviation indicators. It was found that the variance in green productivity across regions nationwide was 0.02806 in 2011 and 0.02192 in 2019, which indicated a gradual narrowing of regional disparities. This study visualized the spatial distribution of green productivity by region in the country from 2011 to 2019 using ArcGIS. The results are shown in Figure 1, in which the different colors represent different levels of green productivity. The deeper the color, the higher the GTFP value, thereby indicating higher levels of green productivity in the city. Figure 1 shows that most of the Yangtze River Delta and the Pearl River Delta regions had green productivity in the range of 1.00-1.06 (only a very small number of cities fell below the lower bound of this range in a limited number of years), which also showed that the above-mentioned regions had high green-productivity levels relative to other areas in China. The western region, on the other hand, faced both ecological and developmental pressures and had a relatively low level of green productivity. However, it is worth noting that in 2019 the green productivity of most cities in the Sichuan and Guanzhong Plain areas, as well as Chongqing, was between 0.99 and 1.05, which indicated that the green productivity of some metropolitan areas in the western region showed an upward trend, thereby implying that green productivity will become an important factor in the coordinated development of China's regional economy.
In the provincial administrative regions, before 2015 there were still some provincial capitals with relatively low green productivity; however, after 2015 the green productivity of most provincial capitals was higher than 1.0, which indicated an obvious divergence in economic efficiency within the administrative regions. The subsequent analysis showed that this may have some correlation with the level of development of the regional digital economy.
Spatial Distribution of the Principal Components of the Digital Economy
This subsection investigates the spatial and temporal patterns of digital-economy development levels. Based on the total digital-economy principal-component index of each city in the country, the spatial distribution of the digital-economy development levels of each region from 2011 to 2019 was visualized using ArcGIS tools (see Figure 2). In Figure 2, the different colors represent different levels of digital-economy development. The deeper the color, the larger the value of the digital economy's principal component, indicating a higher level of digital-economy development in the city. Figure 2 shows that in 2011, the overall development level of China's digital economy was relatively low, with most regions having a digital-economy principal component below 90,000 and only a few cities such as Beijing and Shanghai over 90,000. In 2019, although most cities still had a digital-economy principal-component index below 90,000, there was a significant increase in the number of cities over 90,000 compared to 2011; this increase was concentrated in the provincial capitals of each province. The reason for this phenomenon may be that China's digital economy has taken a leading role in the development of large cities such as provincial capitals. This may also help to explain the changes in green productivity in Chinese cities shown in Figure 1.
Spatial Autocorrelation Analysis
To determine whether there was a spatial correlation in the green productivity of Chinese cities, a spatial-weights matrix based on geographic adjacency was built to calculate the Moran's I of GTFP from 2011 to 2018. The results are shown in Table 3. Generally, the Moran's I did not show a linear trend and was not significant at the 10% level, so we fail to reject H0 (that the data were randomly distributed); the observed spatial pattern could be random. From 2011 to 2018, the variance in Moran's I was as small as 0.0012, a relatively small deviation. These results showed that urban green productivity was not affected by neighboring cities and did not show notable spatial clusters. To determine whether there was a spatial correlation in the growth of the digital economy, the same adjacency-based spatial-weights matrix was used to calculate the Moran's I of the main components of the digital economy from 2011 to 2018. The results are shown in Table 4. The Moran's I was significant at the 1% level, thus rejecting H0 (that the data were randomly distributed); the observed spatial pattern was unlikely to be random. In addition, from 2011 to 2018, the variance in Moran's I was as small as 0.0025, again a rather small deviation. These results showed that the degree of the digital economy was not randomly dispersed; on the contrary, the degree of the digital economy of a city was affected by neighboring regions. The Moran's I was negative in the early years, meaning that the degree of the digital economy was not clustered but instead dispersed. The Moran's I rose from −0.0590 in 2011 to 0.0359 in 2018, which demonstrated that the degree of the digital economy became less dispersed over time. These results correspond to the facts: the degree of the digital economy in China was sporadic, unlike environmental pollution, which spreads out from a center.
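Global Moran's I with a binary adjacency weights matrix, as used for Tables 3 and 4, reduces to a single formula; the three-city example below is purely illustrative.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I with a binary spatial-weights matrix W:
    I = (n / sum(W)) * (z' W z) / (z' z), where z = values - mean."""
    z = values - values.mean()
    n = len(values)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Three cities on a line: city 1-2 and city 2-3 are adjacent
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
gtfp = np.array([1.0, 2.0, 4.0])
I = morans_i(gtfp, W)
```

For this configuration the statistic works out to exactly −1/28: a slightly negative value, i.e. weak dispersion rather than clustering, the same qualitative pattern as the early-year digital-economy values in Table 4.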
Model Testing
The above results showed that there was a significant correlation between the digital economy and the regional green TFP. To further investigate the impact of the degree of digital-economy development on green total-factor productivity and the corresponding impact mechanism, this paper used the ordinary least squares (OLS) model. In this study, the green-productivity rates of 275 cities in China were used as the explained variables and the relevant indicators, which included the principal component of the digital economy, area of the administrative district, population, proportion of secondary industry, proportion of tertiary industry, proportion of investment, proportion of real-estate investment, proportion of local fiscal expenditure, proportion of local fiscal revenue, proportion of foreign direct investment, distance to the nearest port, and number of patents, were used as the explanatory variables. A regression analysis was applied using Stata 15.0 to explore the factors that may have affected green total-factor productivity. In order to prevent an estimation bias caused by the interaction of the indicators, a multicollinearity test was conducted on the above indicators. The results are shown in Table 5. The variance inflation factor (VIF) of each indicator was less than 10. Therefore, there was no multicollinearity relationship between the selected indicators.
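The multicollinearity check can be reproduced with a small VIF routine: VIF_j = 1 / (1 − R²_j), where R²_j comes from regressing column j on the remaining regressors plus a constant. The orthogonal toy columns below give VIF = 1, the no-collinearity benchmark; the threshold of 10 used in the paper flags problematic columns.

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of the regressor matrix X."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.c_[np.ones(n), np.delete(X, j, axis=1)]
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Two exactly orthogonal regressors -> VIF of 1 for both
X = np.array([[1.0, 1.0],
              [-1.0, 1.0],
              [1.0, -1.0],
              [-1.0, -1.0]])
v = vif(X)
```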
Empirical Results
The estimation results of the regression model are given in Table 6 below. In column (1), the coefficient of the core explanatory variable, the degree of digital-economy development, is positive and significant, with a value of 0.0041. Columns (2)-(6) successively add the control variables: area of the administrative district, population, proportion of secondary industry, proportion of tertiary industry, proportion of investment, proportion of real-estate investment, proportion of local fiscal expenditure, proportion of local fiscal revenue, proportion of foreign direct investment, distance to the nearest port, and number of patents. As control variables were added, the coefficient of the digital-economy index remained positive and significant at the 1% level. The regression results were thus robust, indicating that the digital economy increased the green total-factor productivity of cities; therefore, H1 was verified. From the coefficient estimates in column (6), every 1% increase in the level of digital-economy development raised regional green total-factor productivity by 0.083%.

Note: t statistics in parentheses; * p < 0.1, ** p < 0.05, *** p < 0.01.
Instrumental-Variable Estimation
The selection of appropriate instrumental variables for the core explanatory variable is the main approach to addressing the endogeneity problem. In this paper, the Internet-penetration rate of each prefecture-level city in 2001 was used as an instrumental variable for the development level of the digital economy in each region. On one hand, the development of the digital economy relies on the popularity of the Internet: regions with a high penetration rate of Internet technology can nurture mature digital economies, and the local historical Internet-penetration rate influences the later development of the digital economy through factors such as the technology level and usage habits. On the other hand, the historical penetration rate reflects the early state of the Internet industry rather than current green productivity, so the instrument plausibly satisfies the exclusion restriction. It should be noted that the original data for the selected instrumental variable were cross-sectional and could not be used directly in the econometric analysis of panel data. Following the solution of Nunn and Qian (2014), a time-varying component was introduced to construct a panel instrumental variable: the number of national Internet users in the previous year was interacted with each city's Internet-penetration rate in 2001 to serve as the instrument for that city's digital-economy index in that year. Table 7 presents the regression results of the instrumental-variable estimation. The coefficients of the degree of digital economy in columns (1) and (2) are all positive and significant at 1%, which showed that the effect of the digital economy on green-productivity enhancement still held after accounting for the endogeneity problem.
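The Nunn and Qian (2014)-style construction of the time-varying instrument can be sketched as below. The city names, penetration rates, and national user counts are illustrative placeholders, not the paper's data.

```python
# Hypothetical inputs: city-level Internet-penetration rates in 2001
# and national Internet-user counts (in millions) by year.
penetration_2001 = {"city_A": 0.08, "city_B": 0.03}
national_users = {2010: 457, 2011: 513, 2012: 564}   # illustrative figures

def panel_iv(city, year):
    """Time-varying instrument: the city's fixed 2001 penetration rate
    interacted with the previous year's national Internet-user count."""
    return penetration_2001[city] * national_users[year - 1]

for year in (2011, 2012):
    for city in sorted(penetration_2001):
        print(city, year, round(panel_iv(city, year), 2))
```

Because the cross-sectional penetration rate is constant within a city, all of the instrument's time variation comes from the aggregate user series, which is the standard shift-share-style trick for turning a historical cross-section into a panel instrument.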
In addition, for the null hypothesis of underidentification of the instrumental variable, the p-value of the Kleibergen-Paap rk LM statistic was 0.000, which significantly rejected that hypothesis. In the weak-identification test, the Kleibergen-Paap rk Wald F-statistic was larger than the 10% critical value of the Stock-Yogo weak-identification test, so there was no weak-instrument problem. Therefore, the selection of each city's historical Internet-penetration rate interacted with the number of national Internet users as the instrumental variable for the level of digital-economy development was reasonable.
Regional Heterogeneity
The degree of economic development varies significantly across the regions of China, as does the degree of digital-economy development. The eastern region is relatively developed and therefore enjoys better Internet infrastructure and faster progress. Moreover, because of these differences in economic development, the impact of the digital economy on green productivity also varies across regions. In this section, cities in the eastern, central, western, and northeastern regions were compared in terms of the impact of the digital economy on green productivity, as shown in Table 8. The coefficients of the digital-economy degree in all the columns of Table 8 are significantly positive, which proved that the digital economy increased regional green productivity. Column (1) shows that the degree of digital economy in the eastern region significantly improved that region's green productivity, owing to its advanced digital-communication technology and infrastructure. The same presumably held for the northeastern region, for which the absolute value of the estimated coefficient was slightly larger than that of the eastern region, indicating a stronger influence of the digital economy on green productivity. The estimates for the central and western regions were similar, and the absolute values of their coefficients were smaller than that of the eastern region. Possible reasons include the later start to the building of Internet infrastructure, slower urbanization, and insufficient utilization of Internet infrastructure. Therefore, the positive effect of the digital economy on green productivity in the central and western regions was generally smaller than that in the eastern and northeastern regions.
Heterogeneity in Terms of Urban Scale
Compared with small cities, large cities enjoy the economic externalities of urban agglomeration, characterized by matching, sharing, and learning, which encourage resource sharing and enhance spillover effects. Cities' advantages in expert knowledge and diversity, manpower, and information networks encourage technological innovation and application, which attracts digital industries. In addition, for a digital start-up to work effectively as a medium for transactions, it is crucial to build a user base and improve services. This requires an Internet company to take active measures to consolidate and expand its user group and to form mutually beneficial mechanisms with other market players, thus acquiring users and rapidly increasing installations. Therefore, the degree of the digital economy varies among cities with different populations. This section aims to determine the effect of the digital economy on the green productivity of cities with different populations.
In this section of the study, cities were categorized as small, medium, or large according to the population benchmarks of 1 million and 5 million in order to study the effect of the digital economy on the green productivity of each category. Table 9 shows the regression results for each category. In column (1), the regression coefficient of the digital economy is not significant, showing that the green productivity of small cities was not sensitive to the degree of the digital economy. This is probably because the small cities had populations too small to support the growth of digital economies, or because the multiplier effect of the digital economy was not strong enough to produce a notable effect on green productivity. In contrast, the coefficients for the large and medium cities were significantly positive, indicating that once a city's population reached a certain level, the degree of the digital economy produced a notable effect on the increase in green productivity. The estimated coefficients in columns (2) and (3) show that the absolute values of the coefficients for the large and medium cities were close: once the urban population passed a certain benchmark, the elasticity of the effect of the digital economy on green productivity remained essentially the same.
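The grouping step that precedes the per-category regressions can be sketched as follows. The population figures are hypothetical, and the assignment of the exact benchmark values (1 million and 5 million) to the upper category is an assumption, as the paper does not state how boundary cases were handled.

```python
def size_class(population):
    """Classify a city by population, using the paper's benchmarks of
    1 million and 5 million. Boundary values are assigned to the larger
    class here (an assumption; the paper does not specify)."""
    if population < 1_000_000:
        return "small"
    if population < 5_000_000:
        return "medium"
    return "large"

# Hypothetical cities, grouped before running one regression per group.
cities = {"city_A": 650_000, "city_B": 2_400_000, "city_C": 8_100_000}
groups = {}
for name, pop in cities.items():
    groups.setdefault(size_class(pop), []).append(name)
print(groups)
```

Each resulting group would then be passed to its own OLS regression, producing the three columns reported in Table 9.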
Digital Economy and Spatial Distribution of Green Invention Patents
This section focuses on the spatial and temporal patterns of green invention patents. Based on the green-invention-patent indicators for each city in the country, specifically the total number of green-invention-patent applications and the total number of green-utility-patent applications, this study visualized the spatial distribution of green invention patents in each region of the country from 2011 to 2019 using ArcGIS (see Figure 3). In Figure 3, the different colors represent different numbers of green-patent applications; the deeper the color, the greater the total number of green patents applied for in that city in that year, indicating a higher level of green innovation in the city. Figure 3 shows that from 2011 to 2019, the number of green-invention-patent applications in China was relatively small, and in a considerable number of cities it was 0. The distribution of cities holding green-invention patents was consistent with their level of digital-economy development; it is not difficult to find that the cities with higher green productivity were generally also the cities with more green-invention-patent applications, which implied an inherent connection between the level of digital-economy development, green-invention patents, and green productivity.

This section of the study tested whether the impact of the digital economy described in the previous section included an increase in the number of green patents held by companies. Using the database of listed companies from 2012 to 2019, this study examined the mechanism of the impact of the digital economy on local green productivity through a regression approach based on the green-invention-patent data in the database. Table 10 reports the results of the analysis. In the regression results in column (1), the coefficient of the degree of digital-economy development is significantly positive.
This indicated that digital-economy development helped to increase the number of green patents held by enterprises. Column (2) adds firm-level control variables; the coefficient estimate and significance of the core explanatory variable, the level of digital-economy development, remained consistent, indicating that the conclusions of this paper still held when firm-level influencing factors were controlled. Combining the regression coefficients in column (2), it is easy to see that the number of green-invention patents of enterprises increased by 128.9 for every 1% increase in the degree of digital-economy development. Columns (3)-(4) replace the explained variable with enterprises' green-utility patents; the coefficient of the level of digital-economy development was still significantly positive, again indicating that digital-economy development helped to increase the number of green patents from enterprises. In terms of the coefficient value, the number of utility patents was comparatively small, which indicated that the number of green-utility patents needs to be increased.
Digital Economy and Exit of Polluting Enterprises from Markets
This section focuses on the spatial pattern of the exit of polluting enterprises from the market. Based on the results of matching the industrial-enterprise database with the pollution-emission database, this study presents data on the market exit of polluting industrial enterprises in 2011, determined according to accounting standards for enterprise survival. The spatial distribution of polluting industrial enterprises in each region of the country in 2011 was visualized using ArcGIS (see Figure 4). In Figure 4a, the different colors represent different numbers of polluting enterprises; the deeper the color, the greater the number of emitters that exited the market in the city in that year, indicating that the city eliminated more polluting enterprises. Combined with the results in Figure 1, it is easy to see that cities with more polluting enterprises exiting the market were generally also cities with higher green productivity; this conclusion still held when the indicator was replaced with the proportion of polluting enterprises exiting the market, which implied a certain intrinsic connection between the level of digital-economy development, the exit of polluting enterprises, and green productivity. Next, whether the level of digital-economy development could accelerate the exit of polluting enterprises from the market was tested. We matched the industrial-enterprise database with the enterprise-pollution data to determine the number of enterprises entering and exiting based on their operation. Since the industrial-enterprise database overlapped with the study period only in 2011 and 2012, and since it was not possible to determine whether enterprises exited the market in 2012, the industrial-enterprise data from 2011 were selected as the research sample for this section.
Table 11 reports the results of the analysis. In the regression results in column (1) of Table 11, the coefficient of the level of digital-economy development is significantly positive. This indicated that the number of polluting enterprises exiting the market was higher in regions with a higher level of digital-economy development. When the explained variable is replaced with the probability of enterprises exiting the market, the coefficient of the level of digital-economy development in column (2) of Table 11 is also significantly positive, indicating that the development of the digital economy accelerated the exit of polluting enterprises. Combining the regression coefficients in column (2) of Table 11, it is easy to see that for every 1% increase in the level of digital-economy development, the probability of polluting enterprises exiting the market increased by 7.69%. This indicated that the development of the digital economy enhanced the green productivity of the region by encouraging the exit of polluting enterprises from the market.
Conclusions
Given that the digital economy has greatly influenced socioeconomic development, the green total-factor productivity of China and its decomposition values from 2011 to 2019 were measured from the perspective of green development with the help of the DEA-GML index. After measuring the development degree of the digital economy, the impact of the digital economy on green total-factor productivity and its mechanism were empirically tested in multiple dimensions. This study showed that the degree of the digital economy was conducive to the development of urban GTFP and that urban GTFP was enhanced through two mechanisms: enhancing the production technology of enterprises and phasing pollution-intensive enterprises out of the market.
First, the role of the digital economy in increasing urban green development should be enhanced. Therefore, it is necessary to increase the government's focus on the digital economy. The government should fully leverage the digital economy to increase the guidance of regional green development. It should cultivate diversified investment bodies; increase investment in all aspects of the digital economy; improve the construction of information and communication infrastructure; facilitate digital-technology research and development; encourage the popularization of practical applications such as artificial intelligence, big data, and the Internet of Things; transform high-energy-consuming and crude-production methods using modern networked and intelligent platforms; improve the allocation of factors in order to increase their combined efficiency; and continuously harness the positive effects of the digital economy on high-quality development.
Second, it is important to strengthen the penetration capacity of the digital economy and accelerate the deployment of the digital economy. Only after the digital economy exerts its scale effect can the urban GTFP be significantly enhanced. Therefore, it is necessary to deepen the integration of the digital economy and traditional industries, help traditional industries digitize and intellectualize, enrich digital knowledge and real digital-application scenarios, and increase the urban GTFP.
Third, it is important to encourage the advanced industrial structure and green development of cities. Research shows that the development of urban GTFP can be increased only when the industrial structure is advanced. The government should prioritize the development of advanced manufacturing industries, actively participate in the division of labor in the advanced global value chain, encourage green technological innovation, and gradually eliminate highly polluting enterprises with low capacity and high energy consumption while also focusing on supporting enterprises with key technologies in hand and strong demonstration effects as well as connecting upstream and downstream supply chains to drive the development of urban GTFP.
A fourth aim should be to strengthen the coordination and cooperation between the digital economy and various factors such as market-oriented reforms, talent training, and institutional governance to develop with greater synergy. It is important to actively integrate traditional industries; integrate digital technology innovation into all aspects of development, production, and application; create technological breakthroughs; and shift green total-factor productivity from a paradigm led by efficiency improvement to one led by technological progress in order to achieve the goal of the green development of the economy.
Author Contributions: C.Z., Z.L. and X.Y. each contributed extensively to the work presented in this paper. Conceptualization, C.Z.; methodology and validation, Z.L.; data curation, X.Y.; writing-original draft preparation, C.Z. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Informed Consent Statement: Not applicable.
Data Availability Statement: Due to the confidentiality and privacy of the data, they will only be provided upon reasonable request.
Effect of elevated carbon dioxide on shoal familiarity and metabolism in a coral reef fish
Atmospheric CO2 is expected to more than double by the end of the century. The resulting changes in ocean chemistry will affect the behaviour, sensory systems and physiology of a range of fish species. Although a number of past studies have examined effects of CO2 in gregarious fishes, most have assessed individuals in social isolation, which can alter individual behaviour and metabolism in social species. Within social groups, a learned familiarity can develop following a prolonged period of interaction between individuals, with fishes preferentially associating with familiar conspecifics because of benefits such as improved social learning and greater foraging opportunities. However, social recognition occurs through detection of shoal-mate cues; hence, it may be disrupted by near-future CO2 conditions. In the present study, we examined the influence of elevated CO2 on shoal familiarity and the metabolic benefits of group living in the gregarious damselfish species the blue-green puller (Chromis viridis). Shoals were acclimated to one of three nominal CO2 treatments: control (450 µatm), mid-CO2 (750 µatm) or high-CO2 (1000 µatm). After a 4–7 day acclimation period, familiarity was examined using a choice test, in which individuals were given the choice to associate with familiar shoal-mates or unfamiliar conspecifics. In control conditions, individuals preferentially associated with familiar shoal-mates. However, this association was lost in both elevated-CO2 treatments. Elevated CO2 did not impact the calming effect of shoaling on metabolism, as measured using an intermittent-flow respirometry methodology for social species following a 17–20 day acclimation period to CO2 treatment. In all CO2 treatments, individuals exhibited a significantly lower metabolic rate when measured in a shoal vs. alone, highlighting the complexity of shoal dynamics and the processes that influence the benefits of shoaling.
Introduction
Atmospheric CO2 has risen to >400 ppm (Dlugokencky and Tans, 2016) because of human activity, higher than at any time in the last 800 000 years (Masson-Delmotte et al., 2013). The partial pressure of CO2 (pCO2) in the world's oceans is rising at approximately the same rate as in the atmosphere (Doney et al., 2009; Le Quéré et al., 2013). If current anthropogenic CO2 emissions continue unabated, average CO2 levels in the atmosphere and surface ocean will more than double from present-day levels by the year 2100 (Fabry et al., 2008; Collins et al., 2013). Furthermore, new models indicate that seasonal cycles in ocean pCO2 will be amplified in the future, meaning that marine organisms will experience extended periods of ocean pCO2 in excess of 1000 µatm by the end of this century (McNeil and Sasse, 2016). Rising CO2 levels are predicted to affect a range of behavioural (Briffa et al., 2012; Nagelkerken and Munday, 2016) and physiological processes (Pörtner et al., 2004; Heuer and Grosell, 2014) in marine organisms, with potentially far-reaching effects on marine ecosystems (Wittmann and Pörtner, 2013).
Higher environmental CO2 levels can be a problem for marine organisms because they act to acidify the blood and tissues and thus affect pH-dependent physiological processes (Pörtner et al., 2004). Fish defend against acidosis in a high-CO2 environment by actively regulating acid-base-relevant ions in their blood and tissues (Heuer and Grosell, 2014). Consequently, they are able to maintain a pH suitable for cellular processes, even at very high ambient CO2 levels (Ishimatsu et al., 2008; Esbaugh et al., 2012, 2016). However, this acid-base regulation leads to changes in extracellular ion concentrations that may interfere with the function of neurotransmitter receptors (Nilsson et al., 2012). These neurological changes can lead to altered behaviour and impaired sensory systems. Behavioural effects of exposure to high CO2 include reduced learning ability (Chivers et al., 2014), altered activity levels (Ferrari et al., 2011a), higher anxiety (Hamilton et al., 2014), disrupted behavioural lateralization (Domenici et al., 2011) and reduced predator avoidance behaviour (Munday et al., 2010; Ferrari et al., 2011b). Behavioural responses to visual (Ferrari et al., 2012b; Chung et al., 2014), olfactory (Munday et al., 2009b) and auditory cues (Simpson et al., 2011; Rossi et al., 2016) are all affected, although one study found that visual cues were less affected than olfactory preferences at projected near-future CO2 levels (Lönnstedt et al., 2013). Some behavioural traits appear to be unaffected by elevated CO2, particularly foraging behaviour and swimming kinematics (Munday et al., 2009c; Nowicki et al., 2012; Maneja et al., 2015). In addition, some species, such as the Atlantic cod (Gadus morhua), exhibit tolerance to elevated CO2 in terms of behavioural effects (Hedgärde, 2013, 2015).
Even among closely related coral reef fishes, there is substantial variability among species in the degree of behavioural effects in response to elevated CO2 (Ferrari et al., 2011a).
The effects of elevated pCO2 and decreased pH on other physiological characteristics are unclear. Theoretically, the energetic cost of increased regulatory mechanisms (such as acid-base balance regulation) should manifest in higher overall energetic needs (Ishimatsu et al., 2008). However, studies measuring standard metabolic rate (SMR; the metabolic rate of a resting, fasting and non-stressed individual; a measure of basic energetic needs) of fishes under elevated pCO2 have found highly variable results (reviewed by Heuer and Grosell, 2014; Lefevre, 2016), reporting increases (Munday et al., 2009a; Enzor et al., 2013), decreases and no effects of pCO2 on SMR (Deigweiher et al., 2008; Melzner et al., 2009; Strobel et al., 2012; Couturier et al., 2013), suggesting that the effects may be species or context specific. However, another important consideration is that, although many studies have examined the effect of pCO2 on the metabolic rate of gregarious fish species (Munday et al., 2009a; Miller et al., 2012; Rummer et al., 2013), all have measured metabolic rate in solitary individuals, which can have effects on the measured metabolic rate because of the stress of isolation (Nadler et al., 2016). Therefore, how social context may modulate the effect of pCO2 on metabolic traits, such as SMR, remains unknown. Recent work found that the immediate social environment can have a significant impact on metabolic rate, with individuals tested in the presence of shoal-mate cues exhibiting a significantly lower minimal measured metabolic rate than individuals tested in social isolation (Nadler et al., 2016). One factor that is likely to contribute to this calming effect is a reduced need for individual vigilance, because animal groups exhibit improved threat detection by having 'many eyes' to scan for predators (Roberts, 1996; Ward et al., 2011).
Individuals accustomed to a social environment may also exhibit reduced stress when allowed to associate with conspecifics (Hennessy et al., 2009). The importance of these benefits could increase in the presence of environmental stressors, such as rising pCO2, because having a reduced metabolic rate in shoaling conditions could aid in coping with the projected rise in energy demand associated with changing environmental conditions. Group living is widespread among fish species and carries benefits for individuals with respect to predator avoidance, foraging opportunities and energy use (Shaw, 1978; Krause and Ruxton, 2002). A learned familiarity can be attained following a prolonged period of interaction between social individuals (reviewed by Ward and Hart, 2003), increasing the probability of reciprocal cooperation between members of an animal group (Granroth-Wilding and Magurran, 2013). This greater cooperation can have benefits for a range of fitness-enhancing processes and characteristics, including foraging, social learning, body condition and survival (Seppä et al., 2001; Swaney et al., 2001; Atton et al., 2014). As a result, fish prefer to shoal with familiar conspecifics (e.g. Magurran et al., 1994; Griffiths and Magurran, 1997; Bhat and Magurran, 2006; Edenbrow and Croft, 2012), with individual identification achieved primarily through olfactory stimuli (Partridge and Pitcher, 1980; Brown and Smith, 1994; Ward et al., 2002). As elevated pCO2 is known to impact behavioural traits and sensory abilities necessary for social recognition, the ability to recognize familiar shoal-mates may be compromised in future environmental conditions. Elevated pCO2 may affect the calming effect and the ability of fish to recognize conspecifics owing to its effects on fish behaviour, sensory abilities or physiology.
In the present study, we examined the effect of elevated pCO2 on familiarity and the calming effect in the blue-green puller, Chromis viridis (Cuvier, 1830), a common species of shoaling damselfish. Shoals were acclimated to one of the following three CO2 treatments: control (450 µatm), mid-CO2 (750 µatm) or high-CO2 (1000 µatm). Our first aim was to determine whether elevated pCO2 modulated familiarity, using a choice test in which individuals were given the choice to associate with familiar shoal-mates or unfamiliar conspecifics. Our second aim was to explore whether the calming effect was altered by environmental pCO2, using an intermittent-flow respirometry methodology for social species. We hypothesized that familiarity would be disrupted by elevated pCO2. Given the known benefits of familiarity to shoaling fish (Seppä et al., 2001; Swaney et al., 2001; Atton et al., 2014), we also predicted that the calming effect on the minimal measured metabolic rate would be reduced if familiarity was disrupted at elevated pCO2.
Fish collection and maintenance
Experiments were conducted at the Lizard Island Research Station in the northern Great Barrier Reef (14°40′08″S; 145°27′ 34″E). Shoals of C. viridis (standard length, 3.22 ± 0.03 cm; body mass, 1.29 ± 0.04 g; mean values ± SEM) were collected from reefs in the lagoon adjacent to the Lizard Island Research Station using hand nets and barrier nets. Chromis viridis is an abundant, live coral-associated shoaling species found on coral reefs throughout the Indo-Pacific region in groups ranging in size from a few to hundreds of individuals (Randall et al., 1997). Fish were placed into groups composed of eight individuals and housed in replicate 30 litre aquaria in a flow-through seawater system. All experimental shoals were held together for a minimum of 15 days to ensure that they exhibited a uniform degree of familiarity . Fish were fed to satiation twice daily with INVE Aquaculture pellets and newly hatched Artemia sp.
Carbon dioxide treatments and administration
Shoals were acclimated to one of the following three CO2 treatments: 450 µatm (ambient control), 750 µatm or 1000 µatm (4-7 days for behaviour experiments and 17-20 days for physiology experiments; seawater chemistry summarized in Table 1). These elevated-CO2 treatments were chosen based on the range of CO2 levels projected for the year 2100 (Collins et al., 2013; McNeil and Sasse, 2016). The CO2 administration methodologies followed standard procedures for ocean acidification research (Gattuso et al., 2010). The only deviation from this prescribed methodology was the use of single header tanks for each CO2 treatment (Cornwall and Hurd, 2015), as space limitations in the field prevented us from having multiple header tanks for each CO2 treatment. Seawater was pumped directly from the ocean into each 60 litre header tank. Elevated-CO2 seawater treatments were achieved by dosing CO2 to a set pH, using a pump placed into each header tank through which CO2 was diffused. This pump aided in rapid dissolution of CO2 and vigorous stirring of water in the header tank. A pH controller (Aqua Medic, Germany) attached to each CO2 treatment header tank maintained pH at the desired level. In control header tanks, air was diffused through sump pumps. Equilibrated seawater was then pumped at a rate of ~700 ml/min to each of the replicate 30 litre experimental tanks. For each of these replicate tanks, seawater pHNBS (pH measured on the NBS scale; Mettler Toledo SevenGo Pro) and temperature (Comark C22) were recorded daily. Seawater CO2 was confirmed with in situ CO2 measurements, using a portable CO2 equilibrator and non-dispersive infrared (NDIR) sensor (Vaisala GMP343; Hari et al., 2008; Munday et al., 2014b). For experiment 1, in situ CO2 measurements were conducted once weekly in the control and 1000 µatm treatments to confirm CO2 levels based on pH measurements.
During experiment 2, these measurements were conducted on each treatment at least three times weekly, during which CO2 measures were recorded. These measurements are detailed in Table 1 and confirm our calculated pCO2. Salinity was measured by an automated float in the Lizard Island lagoon (Bainbridge, 2015). Water samples were taken twice weekly and analysed for total alkalinity by Gran titration (888 Titrando, Metrohm, Switzerland) to within 1% of certified reference material (Professor A. Dickson, Scripps Institution of Oceanography). Average pCO2 was calculated with the program CO2SYS from measured pHNBS, temperature, salinity and total alkalinity, using constants from Mehrbach et al. (1973) refitted by Dickson and Millero (1987) and Dickson (1990) for KHSO4.

Table 1 note: The estimated partial pressure of CO2 (Estimated pCO2) was calculated in the program CO2SYS using the other measured parameters. In situ pCO2 was measured using a portable CO2 equilibrator with a non-dispersive infrared (NDIR) sensor. Seawater pH was measured on the NBS (National Bureau of Standards) scale (pHNBS). Error is SEM.
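Dosing CO2 to a set pH, as in the header tanks described above, amounts to simple set-point logic with a small deadband. The sketch below is a conceptual illustration only; the deadband value is assumed, and it does not describe the actual behaviour of the Aqua Medic controller.

```python
def solenoid_open(ph, set_point, currently_open, deadband=0.02):
    """pH-stat logic: diffusing CO2 lowers pH, so dose whenever pH
    drifts above the set point and stop once it falls back below.
    The deadband (hysteresis) avoids rapid on/off chatter when the
    reading sits right at the set point."""
    if ph > set_point + deadband:
        return True            # pH too high: open solenoid, dose CO2
    if ph < set_point - deadband:
        return False           # pH at/below target: close solenoid
    return currently_open      # inside deadband: keep current state
```

Each elevated-CO2 treatment then corresponds to a different pH set point, chosen so that the equilibrated seawater reaches the nominal 750 or 1000 µatm pCO2.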
Experiment 1: effect of elevated CO 2 on familiarity
Nine experimental C. viridis shoals, each composed of eight fish, were acclimated to each CO 2 treatment for a period of 4-7 days before experimentation. This time period is sufficient for elevated CO 2 to induce behavioural changes in reef fishes, and previous studies indicate that longer acclimation periods do not change results (Munday et al., 2013a, 2014a; Welch et al., 2014). Two individuals per group were chosen randomly for testing for shoal association preferences (n = 18 individuals per treatment). These individuals were distinguished from each other and their shoal-mates using unique visible implant elastomer (VIE) tags (Hoey and McCormick, 2006).
The VIE tags were administered 24-48 h before placement in the CO 2 treatment. Shoaling preference was established using a choice test, with methodology adapted from Griffiths and Magurran (1997). An elongate testing tank (Fig. 1a) was filled to a depth of 20 cm with seawater at the same CO 2 level as the relevant treatment. Two 1 litre plastic containers (height, 24 cm × diameter, 10 cm) were placed at each end of the tank, 6 cm from the side-wall. The plastic containers were transparent and made porous to olfactory cues by holes drilled around the circumference (50 5 mm holes per container). Shoals composed of 7 fish of either the familiar or an unfamiliar group were placed in these containers. The location of the familiar shoal (right or left container) was randomized. The shoal used as unfamiliar was also randomized, to ensure that each shoal within a treatment was used as the unfamiliar shoal a uniform number of times and that a different unfamiliar shoal was used when testing each of the two focal fish from a shoal. The focal fish was placed in a clear, porous container in the centre of the tank. This container sat over a small coral shelter, and the bottom 3 cm of the container was opaque to allow the fish to take shelter. All fish were left to acclimate in this container for 15 min, which was a sufficient time period for all fish to calm down after handling. The container surrounding the focal fish was then lifted using a pulley system so that the focal fish would not be disturbed by visual cues of the observer. Trials lasted 15 min and were video-recorded (Canon Powershot D10). Pilot trials were conducted with food colouring to estimate the degree of olfactory cue mixing throughout the choice-test tank during the 30 min trial (including both the 15 min acclimation period and the 15 min testing period). While there was olfactory mixing in the neutral zone of the experimental tank (Fig. 1a), no mixing occurred in the shoal association zones within this time frame.

[Fig. 1 caption: (a) The dark ovals on either end of the tank represent the shoal holding containers, and the dark oval in the centre of the tank illustrates the container used for the focal fish during the pre-trial acclimation period. White dots represent the porosity of the containers (each container contained 50 5 mm holes). (b) Side view of the respirometry chamber. The experimental set-up was composed of an inner respirometry chamber (length, 13.5 cm; inner diameter, 3.24 cm; volume of chamber and associated gas-impermeable tubing, 100 ml) and an outer shoal-mate holding chamber (length, 12.0 cm; inner diameter, 11.4 cm; volume of chamber, 1.10 litres). Arrows indicate the direction of water flow through tubing. Each X indicates a water pump used for mixing the inner chamber and flushing both chambers. The outer shoal-mate holding chamber was flushed with its own pump. The outflow port for this outer chamber was connected to the flush pump for the inner respirometry chamber, to provide olfactory cues of shoal-mates to the focal individual. In order to ensure proper mixing in the inner respirometry chamber, a pump ran continuously in a closed loop. Deoxygenated water in the inner chamber was discarded during on phases of the flush pump.]

All focal individuals were tested in both an alone-testing treatment and a shoal-testing treatment (with six shoal-mates).
Using QuickTime Player 7 (v 7.6.6), videos were analysed for the following factors: (i) the proportion of time spent shoaling with each group; (ii) initial shoal choice following removal of the barrier; and (iii) total shoal visits (a proxy for activity, which indicates the number of times that the focal fish traversed the experimental tank). Individuals were said to be shoaling when they were swimming within two body lengths of the shoal (Pitcher and Parrish, 1993). To ensure that focal fish were making an informed choice (e.g. had experienced the sensory cues of both stimulus shoals), they had to visit both shoal preference zones within a trial or they were retested the next day (occurred with 22% of focal fish across CO 2 treatment groups). Different unfamiliar shoals were used when retesting to prevent learning of unfamiliar conspecifics. Activity was recorded so that we could confirm that any effect of CO 2 on shoal association preferences was not attributable to changes in activity levels between treatments.
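The shoaling criterion and preference metric described above can be computed from per-frame distance estimates. The following is a minimal sketch (the function name and frame data are hypothetical; the two-body-length criterion follows Pitcher and Parrish, 1993):

```python
# Classify frames as "shoaling" when the focal fish is within two body
# lengths of a stimulus shoal, then compute the proportion of shoaling
# time spent with the familiar shoal and its deviation from the 0.5
# null expectation of no preference.

def shoaling_preference(dist_familiar, dist_unfamiliar, threshold=2.0):
    """Per-frame distances to each shoal, in body lengths."""
    fam = sum(1 for d in dist_familiar if d <= threshold)
    unfam = sum(1 for d in dist_unfamiliar if d <= threshold)
    total = fam + unfam
    if total == 0:
        return None  # fish never shoaled; the trial would be retested
    prop_familiar = fam / total
    return prop_familiar, prop_familiar - 0.5

# Hypothetical distances for a six-frame snippet of video
prop, deviation = shoaling_preference([1.0, 1.5, 3.0, 0.5, 4.0, 1.2],
                                      [5.0, 4.0, 1.8, 6.0, 1.9, 5.5])
```

The deviation from 0.5 is the quantity later used as the response variable in the mixed-effects models.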
Experiment 2: effect of elevated CO 2 on the calming effect

Ten experimental shoals were acclimated to each CO 2 treatment for a period of 17-20 days. This longer acclimation period was used for this experiment because studies show that metabolism requires a longer period of time to adjust to elevated CO 2 treatments (Enzor et al., 2013). One individual per group was chosen randomly for testing (n = 10 individuals per treatment) and was identified using VIE tags (Hoey and McCormick, 2006). The VIE tags were administered 24-48 h before placement in the CO 2 treatment.
The calming effect was measured using a previously described intermittent-flow respirometry methodology for social species (Nadler et al., 2016;Fig. 1b). Respirometry is a technique in which oxygen uptake rates are measured as a proxy for aerobic metabolism (Steffensen, 1989;Nelson, 2016). Each respirometry chamber was composed of two cylindrical glass tubes: an inner tube (length, 13.5 cm; inner diameter, 3.24 cm; total volume of chamber and associated gas-impermeable tubing, 100 ml) and an outer tube (length, 12.0 cm; inner diameter, 11.4 cm; total volume of chamber minus volume occupied by inner chamber, 1.10 litres). The outer chamber was affixed to the exterior of the inner chamber and was used to provide visual and olfactory cues of shoal-mates to the focal individual. This larger chamber was aerated with a continuously running flush pump. To provide olfactory cues of shoal-mates to the focal individual, the water leaving the outflow port was attached to the inflow vent for the inner chamber's flush pump. The inner chamber was connected to a recirculating pump (to mix water in the respirometer) and a flushing pump that flushed the chamber with oxygen-saturated water for 3 min between each 9 min measurement period. The water used to flush the chamber between measurement periods was maintained at the same pH and pCO 2 as the focal fishes' treatment. Chambers were immersed in separate, temperature-controlled water baths (29 ± 0.5°C). Temperature was maintained through a combination of air conditioning and controlling ambient water flow to the water bath. The metabolic rate of each focal fish was recorded in an alone-testing treatment (no shoal-mates in the outer chamber) and a group-testing treatment (six shoal-mates in the outer chamber). The order of testing trials (testing of the alone or group treatment first) was randomized. All focal fish were given 48 h between testing trials.
Dissolved oxygen concentration in the inner, focal chamber was measured every 2 s and logged using a Fire-Sting fibreoptic oxygen meter (Pyroscience, Germany), connected to a computer. The oxygen-sensing optode was mounted in the recirculation loop in a flow-through cell, to ensure that flow was sufficient for a fast response time of the sensor (Svendsen et al., 2016). Focal fish were fasted for 24-26 h before experimentation to ensure that they were in a post-absorptive state and were left undisturbed in the respirometers for 17-19 h overnight, as C. viridis is quiescent at night. A dim light remained on through the night in the laboratory to simulate moonlight, allowing the focal fish to see their shoal-mates in group testing trials. Activity was recorded during daylight hours using a webcam (H264 Webcam software) and was measured by counting the number of 180°turns for 10 min/h of testing (from which turns/min was calculated). Activity was recorded to ensure that any measured effects of CO 2 on oxygen uptake were not attributable to changes in activity between CO 2 treatments. Slopes (s) were calculated from plots of oxygen concentration vs. time using linear least-squares regression (LabChart v6) and converted to the rate of oxygen uptake (Ṁ O2 ; in milligrams of O 2 per hour). For all trials, background respiration was measured in empty chambers for three measurement periods both before and after trials. Microbial respiration was then subtracted from all fish respiration measurements, assuming a linear increase in microbial respiration over time (Rodgers et al., 2016).
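The conversion from oxygen-concentration slopes to uptake rates, with the linear background correction, can be sketched in Python (illustrative numbers only; the study itself used LabChart for the regressions):

```python
# Oxygen uptake (MO2, mg O2/h) from the least-squares slope of O2
# concentration (mg O2/l) vs. time (h) in a chamber of known effective
# volume, corrected for microbial (background) respiration.

def lstsq_slope(t, y):
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    num = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den

def background_at(time_h, pre_rate, post_rate, trial_length_h):
    # Background respiration assumed to increase linearly between the
    # pre- and post-trial blank measurements (Rodgers et al., 2016)
    return pre_rate + (post_rate - pre_rate) * time_h / trial_length_h

def mo2(t_hours, o2_mg_per_l, volume_l, background_mg_per_h=0.0):
    slope = lstsq_slope(t_hours, o2_mg_per_l)  # negative: O2 declines
    return -slope * volume_l - background_mg_per_h

# Illustrative measurement period (all values are made up)
uptake = mo2([0.00, 0.05, 0.10, 0.15], [8.0, 7.9, 7.8, 7.7],
             volume_l=0.1,
             background_mg_per_h=background_at(1.0, 0.01, 0.03, 2.0))
```

The effective volume used here would be the chamber plus tubing volume minus the volume of the fish.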
Once focal individuals had completed both the aloneand group-testing trials, maximum metabolic rate (MMR) was measured in separate trials, so that each individual's aerobic scope (AS) could be calculated. The AS is an individual's aerobic metabolic capacity, which indicates the available energy that an individual has for all aerobic processes beyond basic maintenance (Farrell, 2016). The MMR was measured using the chase protocol, in which individuals are exercised to exhaustion through manual chasing (Roche et al., 2013). Although this method may not always provide the highest estimates of MMR (Roche et al., 2013), it is an accepted and repeatable method for determining a relative value for MMR between individuals. Fish were considered exhausted when they no longer responded to chasing by burst swimming. Fish were then air exposed for 30 s to ensure that they had depleted all endogenous oxygen stores. Individuals were then transferred to their respective respirometry chambers, and oxygen uptake was measured for 8-10 min (this time frame was used to ensure that oxygen saturation in the water remained >80% air saturation; Hughes, 1973). This method elicits anaerobic exercise in individuals, and maximal rates of oxygen uptake were measured during subsequent recovery. The MMR was measured for all fish in an alone-testing treatment. These oxygen uptake slopes were measured at 3 min intervals, with the greatest oxygen uptake during this period taken as MMR.
Three measures of metabolic rate were analysed. First, the minimal measured metabolic rate in fish exposed to each treatment (MR min ) was estimated using the protocol typically employed to measure SMR in the literature. This was accomplished by taking MR min as the lowest 10th percentile of all Ṁ O2 measurements (Killen, 2014; Chabot et al., 2016), and comparisons were drawn between individuals tested alone and with a group. Second, routine metabolic rate (RMR; the metabolic rate of an undisturbed animal, including costs of random activity) was calculated as the mean Ṁ O2 excluding the first 5 h in the respirometer, and differences between fish tested alone (RMR alone ) and fish tested in groups (RMR group ) were assessed (Killen et al., 2011). These 5 h were excluded from RMR calculations because pilot trials determined that Ṁ O2 in C. viridis takes an average of 5 h to stabilize (SS. Killen, LE. Nadler, MI. McCormick, unpublished data). Third, individuals' response to stress was determined by using the first slope (FS) of each alone- and group-testing trial, following transfer to the respirometer. The stress response was calculated in the context of AS (AS = MMR − MR min ), in order to determine the proportion of AS that fish were using in response to stress (the stressor in this case being handling stress during transfer to the respirometer). The initial stress response (ISR) was therefore calculated as ISR = (FS − MR min )/AS. The Ṁ O2 is commonly used as an indicator of stress and reaction to threats, such as predation, because of the previously established link between oxygen uptake and stress hormones, including cortisol and epinephrine, with oxygen uptake increasing as the concentration of stress hormones rises (e.g. Brown et al., 1982; Morgan and Iwama, 1996). In the present study, the stressor was the handling stress induced during transfer to the respirometer and any stress of being in isolation.
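The three metabolic measures can be computed directly from the Ṁ O2 series. A minimal sketch with hypothetical data (ISR is taken as the proportion of AS used by the first slope, following the description above):

```python
def mr_min(mo2_values, fraction=0.10):
    # Lowest 10th percentile of all MO2 measurements (Chabot et al., 2016)
    ordered = sorted(mo2_values)
    k = max(1, int(len(ordered) * fraction))
    return sum(ordered[:k]) / k

def rmr(times_h, mo2_values, exclude_h=5.0):
    # Mean MO2, excluding the first hours while MO2 stabilizes
    kept = [m for t, m in zip(times_h, mo2_values) if t >= exclude_h]
    return sum(kept) / len(kept)

def isr(first_slope, mrmin, mmr):
    # Fraction of aerobic scope (AS = MMR - MR_min) used in the initial
    # response to handling stress
    return (first_slope - mrmin) / (mmr - mrmin)

# Hypothetical whole-animal rates in mg O2/h
example_isr = isr(first_slope=0.30, mrmin=0.20, mmr=0.60)
```

With these illustrative numbers, the fish uses a quarter of its aerobic scope in the initial stress response.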
Statistical analysis
Statistical analysis was conducted in the R Statistical Environment (v. 3.2.4) using the packages 'nlme', 'multcomp', 'lme4' and 'car' (Bates and Maechler, 2009; R Development Core Team, 2015; Pinheiro et al., 2016). For experiment 1, three separate models were conducted, to determine the preference for the familiar shoal within each treatment (as measured by the proportion of time spent with the familiar shoal). As the null hypothesis is 0.5 (which would indicate no preference for either shoal), the deviation from 0.5 for each observation was used as the response variable, and differences in deviation from 0 were assessed in general linear mixed-effects models (LMMs), with shoal number as a random effect (so that each individual was nested within their experimental shoal). Differences in activity (total shoal visits) between treatments were tested using an LMM, with CO 2 treatment as a fixed effect and shoal number as a random effect. To ensure that all assumptions were met, homogeneity of variance and normality were assessed through visual inspection of the residual and quantile-quantile (Q-Q) plots, respectively. No transformations were necessary to meet assumptions. Initial shoal choice was tested using an LMM with a binomial distribution, with CO 2 treatment as a fixed effect and shoal number as a random effect.
For experiment 2, differences in the MR min , ISR and activity were analysed using an LMM, with CO 2 treatment and testing treatment (alone or group) as fixed effects, body mass as a covariate (to account for differences in size between individuals), and individual as a random effect. In statistical analysis, whole-animal metabolic rate values were used. In figures, metabolic rate measures were mass corrected by plotting the residual values for each measure from the relationship between the logarithm of body mass (in grams) and the logarithm of metabolic rate (in milligrams of O 2 per hour). Each residual was added to the fitted value for mass = 1.29 g, the mean mass of all fish used in the study. Significant differences in CO 2 treatments (which had three levels) discovered using LMM were investigated further using Tukey's multiple comparisons post hoc tests. Differences in MMR and AS with CO 2 treatment were examined using a generalized linear model (GLM), with body mass as a covariate. For these models, assumptions of homogeneity and normality were again checked through visual inspection of residual and Q-Q plots. No transformations were necessary to conform to these assumptions.
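The mass correction applied in the figures (residuals from the log-log body-mass vs. metabolic-rate regression, added to the fitted value at the mean mass of 1.29 g) can be sketched as follows, with hypothetical data:

```python
import math

def mass_corrected(masses_g, mo2_mg_per_h, ref_mass_g=1.29):
    # Fit log10(MO2) = a + b * log10(mass) by least squares, then add
    # each fish's residual to the fitted value at the reference mass.
    x = [math.log10(m) for m in masses_g]
    y = [math.log10(r) for r in mo2_mg_per_h]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    fitted_ref = a + b * math.log10(ref_mass_g)
    return [10 ** (yi - (a + b * xi) + fitted_ref)
            for xi, yi in zip(x, y)]

# With hypothetical data that scale perfectly as MO2 = 0.5 * mass^0.8,
# every fish maps onto the same corrected value at 1.29 g
corrected = mass_corrected([1.0, 1.5, 2.0],
                           [0.5 * 1.0 ** 0.8,
                            0.5 * 1.5 ** 0.8,
                            0.5 * 2.0 ** 0.8])
```

This removes the allometric effect of body size so that individuals of different mass can be compared on a common scale.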
Experiment 2: effect of elevated CO 2 on the calming effect
The MR min tested in a group was significantly lower than MR min tested alone ( Fig. 3a; F 1,26 = 29.01, P < 0.001), regardless of CO 2 treatment ( Fig. 3a; F 2,27 = 0.37, P = 0.698), with 26 out of 30 fish tested exhibiting an average reduction in MR min of 22.8% (the remaining four fish exhibited an average increase in MR min of 10.5% when tested in a group; these four fish were included in all statistical analyses). The interaction between testing and CO 2 treatment was not significant (F 2,26 = 0.71, P = 0.501); however, the magnitude of the calming effect was higher in both elevated-CO 2 treatments than it was in control conditions (450 µatm, 13.9 ± 5.6%; 750 µatm, 21.4 ± 4.2%; and 1000 µatm, 19.8 ± 7.3%; Fig. 3a). Elevated-CO 2 treatments produced a trend towards higher ISR ( Fig. 3b; F 2,27 = 2.94, P = 0.069), with differences attributable to a significant increase in ISR from the control to the high-CO 2 treatment (Tukey's test: 450 vs. 1000 µatm, P = 0.028; for all other comparisons, P > 0.05). The ISR was not affected by testing treatment (F 1,26 = 0.27, P = 0.606).

[Fig. 3 caption: In these panels, metabolic rate measures were mass corrected by using residuals of the relationship between the logarithm of body mass and the logarithm of whole-animal metabolic rate, added to the fitted value for mass = 1.29 g, the mean mass of all fish used in the study. Error bars are SEM, and n = 10 for all treatments. Asterisks indicate statistical significance (*P < 0.05). Statistical analysis was conducted on whole-animal metabolic rates, with body mass as a covariate.]
Discussion
Elevated CO 2 disrupted familiarity, but not the calming effect, in C. viridis. As familiarity is important for a range of processes in shoaling fish (Ward and Hart, 2003), many of the benefits of group living may be altered in changing environmental conditions. However, the calming benefit of shoaling on metabolic rate was maintained in high-CO 2 conditions, indicating that the benefits of group living on overall metabolic demand are likely to persist under projected future pCO 2 .
The loss of familiarity with elevated CO 2 could have occurred as a result of a number of possible mechanisms. First, social recognition may have been disrupted if fish lost the sensory abilities necessary for identifying individuals, particularly by olfactory cues (Partridge and Pitcher, 1980;Brown and Smith, 1994;Ward et al., 2002;Munday et al., 2009b). The changes in shoal-mate association found with rising CO 2 in the present study are consistent with previous work that tested for preferences between conspecifics from different reefs (home vs. foreign reef site) in the cardinalfish, Cheilodipterus quinquelineatus (Devine et al., 2012). In that study, fish lost the association for conspecifics from their home reef under elevated CO 2 , suggesting that association preferences generally may be altered. Alternatively, individuals may still be able to recognize familiar shoal-mates, but may simply have lost the preference to shoal with familiar rather than unfamiliar individuals. Many previous studies have established that shoaling fish prefer to group with familiar conspecifics (e.g. Magurran et al., 1994;Griffiths and Magurran, 1997;Bhat and Magurran, 2006;Edenbrow and Croft, 2012), but few have investigated what factors may cause this preference to be lost (Granroth-Wilding and Magurran, 2013). Neural circuitry is likely to contribute to the development of social behaviour and preferences in fish species (Dreosti et al., 2015). As neurotransmitter function may be impaired by elevated pCO 2 conditions (Nilsson et al., 2012;Heuer and Grosell, 2014), this effect may account for the loss of preferential association with familiar shoal-mates. In addition, memory and learning play an integral role in familiarity, by allowing individuals to learn about their shoal-mates and remember their identity. Although it is known that learning is interrupted by elevated CO 2 (Ferrari et al., 2012a;Chivers et al., 2014), no studies have yet examined effects on fish memory. 
Nevertheless, a disruption to memory could account for the loss of association preference found here in the high-CO 2 treatments.
These mechanisms of familiarity disruption could have a number of ecological implications. If social recognition is disrupted, as a result of either a loss of sensory abilities or a loss of memory, a number of important processes may be affected. First, social learning may be impaired as individuals are unable to distinguish between informed and naïve shoalmates (Swaney et al., 2001). Second, Galhardo et al. (2012) found that personality traits, such as exploratory behaviour and boldness, decrease in fishes in unfamiliar shoals, suggesting that disruption to social recognition could impact fishes' personality traits. Third, defensive behaviours may become less effective, as unfamiliar shoals are slower to react to a predator threat than familiar shoals (Griffiths et al., 2004). Alternatively, if only the preference for the familiar shoal is lost, a range of traits related to shoaling dynamics could be impacted. First, shoal fidelity may decrease, because, without the preference for the familiar shoal, the trade-offs of staying with the familiar shoal vs. migrating to a more suitable, unfamiliar shoal may shift (Muleta and Schausberger, 2013). Second, cooperation between shoal-mates may decrease, because individuals' perception of shoal-mates could shift from that of a collaborator to a competitor in this different social context as the reliability of reciprocal cooperation may be compromised (Granroth-Wilding and Magurran, 2013;Engelmann and Herrmann, 2016).
Given the benefits of familiarity to a range of important shoaling processes, including foraging and social learning (Seppä et al., 2001;Swaney et al., 2001;Atton et al., 2014), we expected the magnitude of the calming effect to suffer under elevated CO 2 . However, unlike familiarity, the calming effect was maintained, and even enhanced, under high CO 2 . This surprising result implies that familiarity and the calming effect may rely on different mechanisms. Previous studies have highlighted the central role of olfactory sensing abilities in social recognition of familiar shoal-mates (Partridge and Pitcher, 1980;Brown and Smith, 1994;Ward et al., 2002), which appear to be more vulnerable to the effects of elevated CO 2 than the visual system (Lönnstedt et al., 2013). Therefore, unlike familiarity, the calming effect may be able to compensate for olfactory impairments using visual cues, as has previously been found for anti-predator behaviours (Lönnstedt et al., 2013). The importance of shoaling to energy budgets could increase in the presence of environmental stressors, as evidenced by the increasing magnitude of the calming effect with higher pCO 2 . Any reduction in metabolic demands (like those induced by shoaling) could aid in coping with the projected rise in energy demand associated with changing environmental conditions. In addition, no effect of CO 2 was found on any of the metabolic traits measured (including MR min , RMR, MMR and AS). Although some studies have indicated an effect of CO 2 on metabolism, most have not, indicating that the results presented here are consistent with many of the studies in the literature (Lefevre, 2016).
The initial physiological reaction to stress increased with high CO 2 . This result is consistent with greater incidences of anxious behaviour in fish exposed to elevated CO 2 (Hamilton et al., 2014). In social species, such as C. viridis, this amplified stress response could stem from the mechanisms presented above for familiarity. If social recognition or memory were lost, individuals may have perceived their shoal-mates to be unfamiliar, owing to the inability to distinguish between individuals, although this effect was not evident between the alone-and group-testing treatments. Stress hormones, such as cortisol, increase when individuals are exposed to an unfamiliar shoal (Yue et al., 2006), which could account for the greater acute stress response that was measured with high CO 2 . Conversely, the increased metabolic stress response may have contributed to the loss of preference for familiar shoal-mates. Shoaling motivation increases with stress and predation risk (Croft et al., 2009;Stier et al., 2013); therefore, the desire to shoal may outweigh the strategic choice to shoal with familiar fish in elevated-CO 2 conditions. No matter what the underlying mechanism is, these results indicate that shoaling may become even more important in altered environmental conditions, with the potential to be used as a behavioural compensatory mechanism (Connell and Ghedini, 2015).
Overall activity (total shoal visits and number of 180° turns) did not vary in either experiment in response to CO 2 treatment, indicating that differences in activity cannot explain the results found. Previous studies have reported a range of findings on the effect of CO 2 on activity. For instance, Munday et al. (2014a) reported an increase in the activity of reef fish species, and Regan et al. (2016) found a reduction in the activity of a river catfish species. In contrast, Nowicki et al. (2012) found no effect of elevated CO 2 on general activity in clownfish, and Munday et al. (2016) measured no effect in larvae of a pelagic kingfish species. These trends imply that CO 2 may have variable effects on activity depending on a range of traits, such as the natural mobility of the study organism, ontogenetic stage and environmental conditions. As with all ocean acidification research, these results must be viewed in the context in which the study was conducted. This type of study must be conducted in the laboratory in order to expose fish to controlled, elevated-CO 2 conditions. Although every effort is made to make these conditions as realistic as possible, the laboratory setting may impart unknown effects on our results. Importantly, fishes will incrementally reach projected CO 2 conditions over a period of many decades, so there may be the potential for acclimation or adaptation over this time period (Munday et al., 2013b). A longer exposure period to elevated CO 2 might lead to different effects on behaviour. Parental exposure to elevated CO 2 does not appear to ameliorate impairments to a number of relevant behavioural traits and sensory systems (Welch et al., 2014), but whether adaptation could reduce the behavioural effects of high CO 2 over longer time frames is unknown.
Future research should work to tease apart which mechanism (social recognition, preference for familiarity or memory) is more likely to be causing the effect of CO 2 on familiarity. Familiarity is important for many aspects of shoaling dynamics (Swaney et al., 2001;Griffiths et al., 2004), so its disruption may create further carry-over effects on a range of processes. The maintenance of the calming effect in the presence of high CO 2 , however, highlights the complexity of shoal dynamics and illustrates that many processes, in addition to familiarity, influence the benefits of shoaling.
HYBRID MODEL OF RSA, AES AND BLOWFISH TO ENHANCE CLOUD SECURITY
Cloud is a term used as a metaphor for wide area networks (such as the Internet) or any similarly large networked environment. It derives partly from the cloud-like symbol used to represent the complexity of networks in schematic diagrams, standing in for everything from cables, routers and servers to data centres and other such devices. Cloud-based systems store the data of multiple organizations on shared hardware. Data segregation is commonly achieved by encrypting each user's data, but encryption alone is not a complete solution. Data can also be segregated by creating virtual partitions, so that each user can access data only within his or her own partition. In our research work we use a hybrid combination of RSA, AES and Blowfish for data encryption, along with data fragmentation using a gateway.
INTRODUCTION
Cloud computing is a model for convenient, on-demand network access to ready-to-use resources, with minimal management effort. It is an emerging paradigm that offers substantial economic advantages, such as reduced time to market, flexible computing capabilities and seemingly limitless computing power, and its popularity in distributed computing environments is growing daily. There is an increasing trend of using cloud environments for storage and data-processing needs. To use the full potential of cloud computing, data is transferred, processed, retrieved and stored by external cloud providers. However, data owners are sceptical about placing their data outside their own sphere of control; their main concerns are confidentiality, integrity, security and the methods used to mine data from the cloud. Greek myths tell of creatures plucked from the surface of the Earth and enshrined as constellations in the night sky. Something similar is happening today in the world of computing: data and programs are being swept up from desktop PCs and corporate server rooms and installed in "the compute cloud". In general, there is a shift in the geography of computation. Cloud computing is here. With its new way of delivering services while reducing the cost of ownership, improving responsiveness and agility, and above all allowing decision makers to focus on the business rather than the IT infrastructure, there is hardly an organisation that has not thought about moving to the cloud.
The move to the cloud is a crucial step for any company, but it has to be made with caution because it could work against users. Organisations need to understand clearly the benefits and challenges, especially for their most critical applications. There are several concerns but, as shown in an IDC survey on cloud adoption issues [GEN09], security is the main one. Why is security such a complicated challenge in the decision to move to the cloud? The answer is simple: lack of control over one's data.
Computing can be described as any activity of using and/or developing computer hardware and software. It includes everything in the bottom layer, from raw compute power to storage capabilities. Cloud computing [1] ties all these entities together and delivers them as a single integrated entity under its own sophisticated management.
Types of Clouds
Clouds are divided into four categories: public, private, community and hybrid.
SERVICES OF CLOUD MODEL
Cloud models provide different types of services: Software as a Service (SaaS) [2], Platform as a Service (PaaS) [3] and Infrastructure as a Service (IaaS) [6], which are deployed as public, private, community and hybrid clouds. 1) Software as a Service (SaaS) [2]: The capability provided to the consumer is the use of applications running on a cloud infrastructure. The applications are accessible from many devices through an interface such as a web browser (e.g., web-based email). The consumer does not control the cloud infrastructure, which includes the network, servers, operating systems and storage.
2) Platform as a Service (PaaS): PaaS [5] provides, entirely over the Internet, all the resources required to implement applications and services, with no downloading or installation of software. The capability provided to the consumer is to deploy applications onto the cloud infrastructure [4], built using programming languages and tools supplied by the provider. The consumer has no control over the cloud infrastructure, including networks, servers and operating systems, but does have control over the deployed applications.
3) Infrastructure as a Service (IaaS) [6]: The capability provided to the consumer is to provision processing, storage, networks and other fundamental computing resources [8]. The consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage and deployed applications, and possibly limited control of select networking components [4].
RESEARCH MOTIVATION
As cloud computing gains popularity, concerns are being voiced about the security issues introduced by the adoption of this new model. The effectiveness and efficiency of traditional protection mechanisms are being reconsidered, as the characteristics of this innovative deployment model differ widely from those of traditional architectures [6]. As more and more information on individuals and companies is placed in the cloud, concerns are growing about just how safe an environment it is.
Cloud-based systems store data of multiple organizations on shared hardware. Data segregation is usually achieved by encrypting each user's data, but encryption alone is not a complete solution. Data can also be segregated by creating virtual partitions and allowing each user to access data in his own partition only. Monitoring malicious activity is a tough task in a cloud system, as logging data might be spread over multiple hosts and data centres. Restricting each user to his own virtual partition keeps the logs from being dispersed, making them easy to access for monitoring.
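The virtual-partition idea can be sketched in a few lines. The `PartitionedStore` class below is a hypothetical illustration (not taken from any cited system, assuming an in-memory key-value store) of keeping each tenant's data and activity log confined to its own partition.

```python
# Minimal sketch of per-tenant virtual partitions on shared storage.
# A tenant can only read/write keys inside its own partition, and
# activity logs are kept per partition so monitoring is not dispersed
# across hosts and data centres.

class PartitionedStore:
    def __init__(self):
        self._partitions = {}   # tenant_id -> {key: value}
        self._logs = {}         # tenant_id -> [(action, key), ...]

    def _partition(self, tenant_id):
        self._partitions.setdefault(tenant_id, {})
        self._logs.setdefault(tenant_id, [])
        return self._partitions[tenant_id]

    def put(self, tenant_id, key, value):
        self._partition(tenant_id)[key] = value
        self._logs[tenant_id].append(("put", key))

    def get(self, tenant_id, key):
        part = self._partition(tenant_id)
        self._logs[tenant_id].append(("get", key))
        if key not in part:
            # Another tenant's keys are simply invisible here.
            raise KeyError(f"{key!r} not in tenant {tenant_id!r}'s partition")
        return part[key]

    def logs(self, tenant_id):
        # One tenant's log lives in one place, easing malicious-activity monitoring.
        return list(self._logs.get(tenant_id, []))
```

A tenant reading its own key succeeds, while the same key requested through another tenant's partition raises `KeyError`, and each tenant's log stays self-contained.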
Restricting user access is another major challenge in cloud-based storage systems. Combining virtual partitions with enhanced user access control in the cloud system allows us to improve data security.
The enhanced cloud system will be compared with existing secure cloud systems against security, performance and ease of use.
The three cloud service models (SaaS, PaaS and IaaS) not only provide different types of services to end users but also expose the information security issues and risks of cloud computing systems [9].
• Hackers might abuse the powerful computing capability provided by clouds to conduct illegal activities. IaaS sits in the bottom layer and directly provides the most powerful functionality of an entire cloud.
• Data loss is an important security risk of cloud models. In SaaS cloud models, companies use applications to process business data and store customers' data in the data centers.
• Traditional network attack strategies can be applied against all three layers of cloud systems. For example, web browser attacks are used to exploit the authentication, authorization, and accounting vulnerabilities of cloud systems.
RELATED WORK
In the research work titled "Controlling Various Network Based ADOS Attacks in Cloud Computing Environment: By Using Port Hopping Technique", E. S. Phalguna Krishna noted that cloud computing security is a sub-domain of computer security, network security and information security. It refers to a broad set of security policies, technologies and flow controls deployed to protect the data, applications and associated infrastructure resources of cloud computing.
In the research work titled "Packet Monitoring Approach to Prevent DDoS Attack in Cloud Computing" (March 2012), Sateesh Kumar Peddoju presented an approach to prevent DDoS attacks in a cloud environment. This hop-count filtering approach provides a network-independent and readily available solution to prevent DoS attacks in the cloud. The method also decreases the unavailability of cloud services to legitimate clients, reduces the number of updates and saves computation time. The approach was simulated in the CloudSim toolkit environment and the corresponding results produced.
In the research work titled "Data Security Model for Cloud Computing" (March 2014), Eman M. Mohamed proposed a data security model for cloud computing. The paper argues that data security has always been an important aspect of quality of service, and that cloud computing faces new and challenging security threats; a data security model must therefore address the main challenges of cloud computing security. The proposed model provides a single gateway as a platform.

In the research work titled "On Modeling Confidentiality Archetype and Data Mining in Cloud Computing" (March 2013) [11], Alawode A. Olaide proposed the concept of data mining in the cloud. The paper discusses to which degree the skepticism of data owners is justified, by proposing the Cloud Computing Confidentiality Archetype and Data Mining (3CADM) model. 3CADM [10] is a step-by-step framework that maps data sensitivity onto the most suitable cloud computing architecture and processes very large datasets over commodity clusters using the right programming model.
In the research work titled "An Approach to Protect the Privacy of Cloud Data from Data Mining Based Attacks" [12] (April 2013), Himel Dev and Tanmoy Sen addressed the privacy of cloud data against data mining based attacks. They first identify the data mining based privacy risks on cloud data and propose a distributed architecture to eliminate them. A cloud data distributor is an entity that receives data from a single client and partitions it into multiple parts, which are distributed among several cloud providing companies (cloud providers). In a nutshell, the approach consists of categorization, fragmentation and distribution of data.

In the research work titled "Information Retrieval through Multi-Agent System with Data Mining in Cloud Computing" [13] (February 2012), Vishal Jain and Mahesh Kumar proposed retrieving useful information through a multi-agent system. The aim of that paper is to develop a practically implemented research model for information retrieval using a multi-agent system with data mining techniques in a cloud computing environment.

In the research work titled "Data Mining for High Performance Data Cloud Using Association Rule Mining" [14] (January 2012), T. V. Mahendra, N. Deepika and N. Keasava Rao proposed mining data in a high-performance data cloud using the Sector/Sphere framework with association rules. The paper discusses an algorithm to mine data from the cloud using Sector/Sphere with association rules. Mining association rules is one of the most important aspects of data mining; association rules are dependency rules which predict the occurrence of an item based on the occurrences of other items.

In the research work titled "Data Mining in Cloud Computing" [15] (April 2012), Bhagyashree Ambulkar and Vaishali Borkar studied how data mining is used in cloud computing. Data mining is the process of extracting potentially useful information from raw data, and the paper shows how SaaS [6] is useful in cloud computing. The integration of data mining techniques into normal day-to-day activities has become commonplace: we are confronted daily with targeted advertising, and businesses have become more efficient through the use of data mining to reduce costs.

In the research work titled "Cloud Computing: An Overview" [16] (September 2013), Eng. Anwar J. Alzaid and Eng. Jassim M. Albazzaz described cloud computing in detail. Cloud computing is a relatively new term referring to a new way of processing and storing information; this style of processing promises to offer a huge amount of computing power to its users without requiring them to invest in expensive hardware. The paper is a brief survey of cloud computing, providing an overview of the basic concepts, definitions, and the general architecture of the technology.

In the research work titled "Mitigating Data Mining Attack in Cloud" [17] (April 2014), A. Raja Rajeswari and R. Sakkaravarthi discussed data mining based privacy attacks in the cloud. Instead of maintaining personal data on one's own hard drive or updating important applications oneself, a user can use a service over the network, at a different location, to store information and/or use its applications.
RESEARCH OBJECTIVES
Various data analysis techniques are available nowadays that can successfully extract valuable information from large volumes of data. These techniques are used by cloud service providers, but attackers can use the same techniques to extract valuable information from the cloud.
Distributing data over different clouds introduces performance overhead when a client needs to access all the data frequently, e.g. when the client needs to perform a global analysis over all the data. The analysis may have to access data from multiple locations, with degraded performance.
Simply using a single cloud provider raises the following main issues: less security, loss of data, no privacy, and high cost of maintenance. Uploading data to distributed cloud providers, on the other hand, protects the client's data by spreading it across different providers, but purchasing several clouds increases the cost to the client. Since using only a single cloud also has the issues above, using a single cloud and dividing it into multiple zones overcomes both the cost and the privacy problems.
Here, the user creates his/her own account at the cloud provider, and the cloud provider assigns privileges to the user depending on the user's role. Different access policies are implemented for different zones. If the user has been assigned a Read role, he/she can only read data from the server; only if the policy allows writing can the user write data to the server. A file sent by the user is stored across the multiple zones available at the server, so if the company tries to mine the user's data, no proper results will be obtained.
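The role-dependent, zone-based access check described above can be sketched as follows. The `ZonePolicy` class, the zone names and the users are illustrative assumptions, not the implemented system.

```python
# Sketch of role-based access checks for zoned cloud storage.
# A user holds a set of permissions per zone; the server consults
# the policy before serving a read or a write.

READ, WRITE = "read", "write"

class ZonePolicy:
    def __init__(self):
        self._grants = {}  # (user, zone) -> set of permissions

    def grant(self, user, zone, *perms):
        self._grants.setdefault((user, zone), set()).update(perms)

    def check(self, user, zone, perm):
        # True only if this user was explicitly granted `perm` in `zone`.
        return perm in self._grants.get((user, zone), set())

policy = ZonePolicy()
policy.grant("alice", "zone1", READ)           # read-only role
policy.grant("bob", "zone1", READ, WRITE)      # read/write role

assert policy.check("alice", "zone1", READ)
assert not policy.check("alice", "zone1", WRITE)  # alice cannot write
assert policy.check("bob", "zone1", WRITE)
assert not policy.check("bob", "zone2", READ)     # no grant in other zones
```

The key design point is that the default is deny: a user with no grant for a zone gets no access there at all.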
We will implement cloud security aspects for data mining by building a cloud system. After implementing the cloud infrastructure, we shall evaluate security measures for data mining in the cloud and fix threats that data mining poses to personal/private data in cloud systems.
METHODOLOGY
This thesis aims to provide an understanding of the different attack vectors created by multi-tenancy and virtualization in a public IaaS cloud. The vectors will be explored, focusing on the threats arising from different tenants coexisting in the same physical host. A critical analysis of the different vectors will be provided, along with guidance on how to approach them. This analysis will be performed using previous work from different entities and authors, together with personal knowledge obtained from experience. As part of the aim of this research, a strong foundation will be provided on the terms cloud computing, multi-tenancy and virtualization; all these areas will be explored and given precise definitions.
The different security issues will also be explored in order to provide an introduction to the main focus of the research.
The client uploads the file to be sent to the cloud provider. RSA encryption is performed at the client side before the file is sent, protecting the system against man-in-the-middle attacks.
This encrypted data is then sent to the gateway.
The gateway receives the file sent by the client and applies AES (Advanced Encryption Standard) encryption to it.
The gateway then splits the file into multiple fragments and records the fragment names in a distribution table. Afterwards, the gateway transfers all the split files to the cloud provider for storage.
The cloud provider receives the fragments and applies the Blowfish algorithm to each fragmented file received from the gateway.
Using this approach, we achieve two purposes:
If anyone tries to intercept the data in transit between the client and the gateway, he or she obtains only encrypted data.
If anyone tries to perform mining on the files stored at the cloud provider, no meaningful results can be retrieved.
To download a file from the cloud, the client follows these steps: 1. The client asks the gateway to download his/her stored file.
2. The gateway forwards the request to the cloud provider, which applies Blowfish decryption to all the fragmented files and sends all the stored split files of that client to the gateway.
3. The gateway receives all the files and combines them into a single file.
4. After linking all the files, the gateway applies AES decryption to the single combined file and then sends the file to the client.
5. The client finally performs RSA decryption to retrieve the data stored inside the file.

From the above bar chart, it is clear that the cost has been reduced. Cloud computing providers typically have detailed costing models used to bill users on a pay-per-use basis. The cost depends on the size of the file: as the file size increases, the cost also increases. We have nevertheless been able to reduce the cost of the proposed scheme, which increases the overall efficiency of the system.
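The upload and download procedures above can be sketched end to end. One hedge is essential: the actual design uses RSA at the client, AES at the gateway and Blowfish at the provider, but to keep this sketch self-contained and runnable, each layer below is a keyed-XOR stand-in (not cryptographically secure), and the key values and helper names are invented for illustration only.

```python
# Round-trip sketch of the client -> gateway -> provider pipeline.
# Each xor_layer call stands in for one real cipher layer
# (client "RSA", gateway "AES", provider "Blowfish").

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Symmetric placeholder cipher: applying it twice restores the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def split(data: bytes, n_parts: int):
    # The gateway fragments the file; part names go into a distribution table.
    size = -(-len(data) // n_parts)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(n_parts)]

CLIENT_KEY, GATEWAY_KEY, PROVIDER_KEY = b"rsa*", b"aes!", b"bf??"  # illustrative keys

def upload(plaintext: bytes, n_parts: int = 4):
    stage1 = xor_layer(plaintext, CLIENT_KEY)       # client-side layer
    stage2 = xor_layer(stage1, GATEWAY_KEY)         # gateway layer
    fragments = split(stage2, n_parts)              # gateway fragmentation
    return [xor_layer(f, PROVIDER_KEY) for f in fragments]  # provider layer per part

def download(stored):
    fragments = [xor_layer(f, PROVIDER_KEY) for f in stored]  # provider decrypts parts
    joined = b"".join(fragments)                    # gateway rejoins fragments
    stage1 = xor_layer(joined, GATEWAY_KEY)         # gateway decryption
    return xor_layer(stage1, CLIENT_KEY)            # client decryption

original = b"customer record: account=1234, balance=500"
assert download(upload(original)) == original
```

Because each placeholder layer is its own inverse, `download(upload(data))` restores the original bytes, mirroring steps 1-5 of the download procedure, while a provider-side miner sees only fragmented, triply obscured parts.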
Figure 4. File size vs. number of parts
From the above bar chart, it is clear that as the file size increases, the number of partitions of the file also increases, which helps secure data at the cloud end. Since the data is not present in a single file, it cannot easily be obtained by a third party, which ensures data security.
CONCLUSION
The primary conclusion of our research is that adoption of user-centric security models and shifting certain parts of communication and computation to the client side allows us to provide the cloud consumers with more visibility and control over their resources. Therefore, using this approach not only the security and privacy concerns of cloud consumers can be addressed more effectively, but also the burden of managing end-users' identities and access control will be reduced from cloud service providers.
This study collectively describes cloud computing security challenges in general and the mitigation practices that have been proposed to handle them. We have successfully implemented the proposed system and reached the conclusion that splitting files into multiple fragments achieves better security in cloud computing. The most important future work identified here, however, is that concrete standards for cloud computing security are still missing. There are some Open Cloud Manifesto standards and a few efforts by the Cloud Security Alliance to standardize processes in the cloud, but cloud vendors and users do not encourage the usage of these standards, as they are restrictive. In addition, cloud computing, with offerings such as storage, infrastructure and on-the-go application design capabilities for the IT industry, still lacks proper standards for interoperability between cloud service providers. This failure to provide concrete security standards, a common underlying framework for data migration, and global standards for cloud interoperability makes the leading technology, cloud computing, still a vulnerable option for aspiring users.
The Role of Calcium in Phospholipid Turnover following Glucose Stimulation in Neonatal Rat Cultured Islets*
(Received for publication, February 21, 1984) Marjorie E. Dunlop

Phospholipid turnover was studied in cultured neonatal rat pancreatic islets. In islets prelabeled with [32P]Pi, 15-min stimulation with glucose (16.7 mM) caused increased labeling of phosphatidic acid (93%) and phosphatidylinositol (94%) and decreased labeling of the polyphosphoinositides (20%). Omission of calcium ion during the period of glucose stimulation did not modify the changes in inositol phospholipids. In islets equilibrated with [32P]Pi in the presence and absence of stimulatory glucose concentrations (11.1 and 1.7 mM, respectively), chelation of calcium by ethylene glycol bis(β-aminoethyl ether)-N,N,N′,N′-tetraacetic acid prevented the increase in phosphatidic acid and phosphatidylinositol labeling. However, the decrease in polyphosphoinositide labeling was inhibited by the chelator only in islets labeled in the absence of stimulatory glucose concentrations, the decrease persisting in islets labeled in the presence of glucose. This suggests that a specific pool of polyphosphoinositides is labeled in the presence of agonist and decreases in response to acute glucose stimulation irrespective of the availability of external calcium.
In the absence of calcium, the addition of [γ-32Pi]ATP to a membrane preparation of cultured islets yielded three lipid phosphorylation products (phosphatidic acid, phosphatidylinositol 4-monophosphate, and phosphatidylinositol 4,5-bisphosphate). In broken cell preparations, [32P]Pi-labeled phosphatidylinositol was also detected. The extent of all these phosphorylations was decreased by the presence of free calcium ion (40 µM).
These data indicate that polyphosphoinositide turnover takes place after glucose stimulation independent of extracellular calcium and support the possibility that this may play a primary role in altering cell calcium availability.
Extensive investigations have established that glucose-induced insulin release requires an increase of calcium ions (Ca2+) within the pancreatic β cell (reviewed in Refs. 1 and 2) and that intracellular and extracellular sources may contribute to raised free cytosolic Ca2+ on stimulation (3)(4)(5). While the inter-relationship of islet calcium sources is complex, it is possible that both intra- and extracellular Ca2+ availability may be affected by glucose-induced changes in islet phospholipids and accompanying changes in the plasma membrane microenvironment. The precedent is seen in a number of tissues (reviewed in Refs. 6-8) in which hormones which exert their effects through mobilization of Ca2+ show coincident changes in phosphatidylinositol metabolism. In these diverse systems, a role for inositol phospholipids in the maintenance and disposition of cellular calcium seems likely. Furthermore, the polyphosphoinositides phosphatidylinositol 4-monophosphate and phosphatidylinositol 4,5-bisphosphate, formed in the plasma membrane from phosphatidylinositol by the action of specific kinases, have been shown to be rapidly degraded by a number of calcium-mobilizing stimuli (9-11). Similarly, the insulin secretagogues glucose (12-15), leucine, and arginine (16) stimulate the metabolism of various β cell phospholipids, with enhanced PI metabolism and catabolism of its polyphosphorylated derivatives shown in response to glucose. The calcium dependency of agonist-induced breakdown of PI, located in most membrane systems of the cell, and of the polyphosphoinositides, located primarily in the plasma membrane (6), has been studied in many systems (8).

* This project was supported by the National Health and Medical Research Council of Australia. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
In most of these, interpretation of calcium dependence remains equivocal. A requirement for calcium has been inferred in the adult islet as no phosphoinositide turnover was seen in the absence of Ca2+ and the presence of EGTA (13,14). In studies employing chelators, it is necessary to consider the capacity of EGTA to deplete intramembrane stores from which Ca2+ may be released by agonists (17). We have demonstrated that glucose stimulation of neonatal islet cells affects a plasma membrane complement of Ca2+ ionophoretic lipids (18), which may be an indication of a capacity for glucose to alter intramembrane calcium.
The following study was undertaken to determine the changes in neonatal islet inositol phospholipids following glucose stimulation and to establish whether these changes are a primary response independent of Ca2+ or whether they require the presence of Ca2+, which could indicate a response secondary to calcium entry.
Materials—[32P]Pi and [γ-32Pi]ATP were obtained from The Radiochemical Centre, Amersham, England. RPMI 1640 medium and HEPES were from Flow Laboratories, Inc. All phospholipids were from Sigma. Precoated silica gel plates were from Merck. All other chemicals and solvents were from BDH Chemicals Ltd. (AnalaR grade).
Islets were transferred to RPMI 1640 medium supplemented to the salt concentration of HEPES-buffered Krebs-Ringer bicarbonate buffer (pH 7.4) containing glucose (1.7 mM) for 20 min. For stimulation, glucose was added to this medium. In calcium-free medium, CaCl2 was omitted and replaced by NaCl. When present, EGTA was added to this medium to a final concentration of 5 mM. The insulin concentration of the supernatant medium was determined by radioimmunoassay (20).
Measurement of [32P]Pi-labeled Phospholipids in Glucose-stimulated Islets—Islets free of supernatant medium were extracted in chloroform:methanol:concentrated HCl (200:100:1, v/v). The organic phase generated following the addition of 0.6 volume of 1 M KCl was dried under N2. Acidic phospholipids were separated by solid phase chromatography on a column of neomycin coupled to a glycophase support as described by Schacht (21). On these columns, PS, PA, PI, PI-4-P, and PI-4,5-P2 are obtained as discrete fractions. The identities of these phospholipids were verified by thin layer chromatography using Silica Gel 60-precoated plates prerun in 1% methanolic potassium oxalate as described by Shaikh and Palmer (22). The initial fractions from the solid phase system contained the nonacidic phospholipids PC and PE, which were separated by thin layer chromatography (23).
Phospholipid Phosphorylation in Broken Cell and Islet Membrane Suspensions—Islets were washed with phosphate-buffered saline and homogenized in 10 volumes of sucrose (0.35 M) in a hand-held Ten Broeck homogenizer. An aliquot of this suspension served as a broken cell preparation. Nuclei and cell debris were removed from a second aliquot by centrifugation (600 × g, 5 min). Following centrifugation of the supernatant (20,000 × g, 20 min), the membrane particulate pellet was resuspended in sucrose. Protein content was determined by the method of Bradford (24). Phospholipid phosphorylation was determined using 5 µl of cell extract preincubated in Na acetate buffer (pH 6.8, 50 mM) containing 10 mM Mg acetate, 1 mM EGTA, and 0.2-2.0 mM CaCl2 (final volume 60 µl). The reaction was started by the addition of 40 µM [γ-32Pi]ATP (10 µCi/ml).
In additional experiments, a sonicate of PI or diolein (0.5 µg in Na acetate buffer) was added to the preincubated cell extract prior to the addition of [γ-32Pi]ATP.
After incubation at 25 °C, the reaction was terminated by the addition of acidic chloroform:methanol, two phases were generated as described above, and the phosphorylated products were separated by thin layer chromatography (22). Dried plates were autoradiographed to localize PI-4,5-P2, PI-4-P, PI, and PA, separated with RF values of 0.19, 0.27, 0.46, and 0.78, respectively. The gel was removed in 2-mm bands and extracted into scintillation mixture for determination of 32Pi content.
RESULTS
The time course of incorporation of [32P]Pi into islet phospholipids is shown in Fig. 1. Over the 15-min stimulation, PC, PS, and PI labeling occurs gradually, while the increased labeling of PA and the decrease in the polyphosphoinositides occur more rapidly (within the first 5 min of stimulation). These phosphoinositide changes in islets equilibrated with [32P]Pi in the presence of stimulatory (11.1 mM) or nonstimulatory (1.7 mM) glucose prior to the acute stimulation are further shown in Table II. A difference in the basal labeling of the phosphoinositides is seen: in glucose-equilibrated islets, basal labeling appears as decreased PI labeling and increased PA labeling. In both equilibration conditions, however, acute glucose stimulation increased PI and PA labeling and decreased the labeling of PI-4-P and PI-4,5-P2, as seen for islets equilibrated with glucose for 24 h.
Omission of Ca2+ during acute glucose stimulation had no effect on the enhancement of [32P]PA formation or the relabeling of PI, and the decrease in both polyphosphoinositides was still apparent. When, in addition to the omission of Ca2+, 5 mM EGTA was present, PA and PI labeling were not increased by acute glucose stimulation, but a decrease in the polyphosphoinositides was still apparent in islets equilibrated in the presence of stimulatory glucose concentrations, although not in islets equilibrated with [32P]Pi at nonstimulatory glucose concentrations. In both calcium-free conditions, glucose failed to increase insulin release significantly above that seen in the absence of stimulatory glucose concentrations.
In vitro phosphorylation from [γ-32Pi]ATP in broken cell and membrane preparations is shown in Fig. 2.
TABLE II

Effect of calcium removal on [32P]Pi labeling of inositol phospholipids and phosphatidic acid in response to acute glucose stimulation following prelabeling in the presence of stimulatory and nonstimulatory glucose concentrations

Islets were incubated in RPMI 1640 medium modified to bicarbonate-buffered Krebs solution containing glucose (1.7 or 16.7 mM) for 15 min. Ca2+ was omitted in the presence and absence of EGTA (5 mM). Inositol phospholipids were determined following thin layer chromatography. Values shown are mean ± S.E. (n = five to eight observations). For each prelabeling condition, the statistical significance of the difference from control (1.7 mM glucose) is indicated by an asterisk (p < 0.05), a double asterisk (p < 0.01), and a triple asterisk (p < 0.005).
DISCUSSION
This study has shown that glucose induces a sequence of events in cultured neonatal rat islets indicating phosphatidylinositide phosphodiesteratic cleavage to form diacylglycerol, its phosphorylation to form phosphatidic acid, and the resynthesis of PI through cytidyl phosphointermediates. This confirms previous findings in mature islets. The time course studies also support the finding of Laychock (14) that polyphosphoinositide hydrolysis is an early event in glucose-induced insulin secretion. The role of Ca2+ in PI and polyphosphoinositide turnover in different tissues has been controversial. In adult rat islets, it has been reported that PI and polyphosphoinositide turnover, measured as [32P]Pi labeling (14) or inositol phosphate release (15), is markedly inhibited by the removal of Ca2+ together with the addition of EGTA to the extracellular medium. It was therefore inferred that this phosphoinositide turnover is dependent on an influx of calcium. Using conditions of labeling with [32P]Pi similar to those employed by Laychock (14), similar results were obtained in the present study. However, when phosphoinositide pools were labeled in the presence of stimulatory concentrations of glucose, quite different findings resulted. Even in the absence of extracellular Ca2+ and in the presence of a high concentration of EGTA, polyphosphoinositide loss was induced by acute exposure to glucose. This suggests that when labeling is carried out in the presence of agonist (stimulatory glucose concentrations), a specific pool of phosphoinositides, inaccessible to EGTA and possibly at an inner membrane leaflet site, is labeled. This may be analogous to the situation in other tissues where agonist labeling reveals a specific pool of PI which is hormone-responsive (29-31) and Ca2+-independent (31). In the islet, PI turnover itself remains dependent on extracellular Ca2+ even with agonist labeling, but polyphosphoinositide breakdown does not.
The role of Ca2+ in regulating phosphoinositide turnover was investigated further by looking at the incorporation of 32Pi from [γ-32Pi]ATP into phospholipids in membrane preparations and homogenates, to assess endogenous kinase activity. Phosphorylation of PI to form PI-4-P and PI-4,5-P2 and of diacylglycerol to form PA was demonstrated in membrane preparations, with resynthesis of PI demonstrable only in homogenates. The net formation of the polyphosphoinositides and of PA was shown in membrane preparations to be inhibited by Ca2+. This may reflect the sensitivity to Ca2+ of islet phosphodiesterases and phospholipases, as described for liver (32), brain (33), lymphocytes (34), smooth muscle (35), whole pancreas (36), and platelets (37). However, Ca2+ inhibition of the kinases involved may also contribute, as the microsomal diacylglycerol kinase of rat liver is inhibited by elevated Ca2+ (38). The findings described in the present report carry the following implications. As the polyphosphoinositides are chelators of both Ca2+ and Mg2+ (39), a change in their amount relative to other phospholipids located at internal cell membranes may change the amount of Ca2+ bound to the plasma membrane. However, it must be remembered that, using the techniques currently employed to measure phosphoinositide turnover, it is not possible to establish whether the pool size is sufficient to effect changes in membrane Ca2+ availability and/or disposition. The finding of an absolute dependence on extracellular Ca2+ of PA and PI turnover, but not of polyphosphoinositide breakdown, may indicate that the latter glucose-induced phospholipid turnover is an initial event following glucose stimulation which precedes Ca2+ entry into the islet. The sensitivity of polyphosphoinositide reaccumulation to Ca2+, demonstrated in the membrane preparations using [γ-32Pi]ATP, would indicate that while intracellular Ca2+ levels remain high, polyphosphoinositide reaccumulation is prevented.
In support of the sequence described are the ultrastructural studies of the mature pancreatic islet which show an accumulation of calcium closely associated with the plasma membrane, which is depleted following glucose stimulation (40).
The current study emphasises the complexity of the membrane-associated inositol phospholipid pools and the importance of agonist labeling in revealing a specific, glucose-responsive, and extracellular Ca2+-independent pool of polyphosphoinositides in the pancreatic islet. In thrombin-stimulated platelets, a loss of PI-4,5-P2 has been shown to precede Ca2+ mobilization, phospholipase activation, the formation of arachidonate metabolites, and alteration in polymerization of cytoskeletal elements (41). By analogy, the breakdown of polyphosphoinositides in the agonist-labeled pool in the neonatal islet may be an initiating step integral to glucose-induced insulin release.
|
v3-fos-license
|
2019-04-27T13:10:09.020Z
|
2019-01-16T00:00:00.000
|
264878331
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://hess.copernicus.org/articles/23/225/2019/hess-23-225-2019.pdf",
"pdf_hash": "c35e34b76b8887240f84c1cf1a3f3dcdf9002281",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:728",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "c35e34b76b8887240f84c1cf1a3f3dcdf9002281",
"year": 2019
}
|
pes2o/s2orc
|
Stochastic reconstruction of spatio-temporal rainfall patterns by inverse hydrologic modelling
Knowledge of spatio-temporal rainfall patterns is required as input for distributed hydrologic models used for tasks such as flood runoff estimation and modelling. Normally, these patterns are generated from point observations on the ground using spatial interpolation methods. However, such methods fail in reproducing the true spatio-temporal rainfall pattern, especially in data-scarce regions with poorly gauged catchments, or for highly dynamic, small-scale rainstorms which are not well recorded by existing monitoring networks. Consequently, uncertainties arise in distributed rainfall–runoff modelling if poorly identified spatio-temporal rainfall patterns are used, since the amount of rainfall received by a catchment as well as the dynamics of the runoff generation of flood waves is underestimated. To address this problem we propose an inverse hydrologic modelling approach for stochastic reconstruction of spatio-temporal rainfall patterns. The methodology combines the stochastic random field simulator Random Mixing and a distributed rainfall–runoff model in a Monte Carlo framework. The simulated spatio-temporal rainfall patterns are conditioned on point rainfall data from ground-based monitoring networks and the observed hydrograph at the catchment outlet and aim to explain the measured data as well as possible. Since we infer a three-dimensional input variable from an integral catchment response, several candidates for spatio-temporal rainfall patterns are feasible and allow for an analysis of their uncertainty. The methodology is tested on a synthetic rainfall–runoff event at sub-daily time steps and a spatial resolution of 1 km² for a catchment partly covered by rainfall. A set of plausible spatio-temporal rainfall patterns can be obtained by applying this inverse approach. Furthermore, results of a real-world study for a flash flood event in a mountainous arid region are presented.
They underline that knowledge about the spatio-temporal rainfall pattern is crucial for flash flood modelling even in small catchments and arid and semiarid environments.
Motivation
The importance of spatio-temporal rainfall patterns for rainfall-runoff (RR) estimation and modelling is well known in hydrology and has been addressed by several simulation studies, especially since distributed hydrologic models have become available. Many of those studies demonstrated the effect of different spatial rainfall patterns on the resulting runoff responses (Beven and Hornberger, 1982; Obled et al., 1994; Morin et al., 2006; Nicotina et al., 2008) or addressed the errors in runoff prediction and the difficulties in parameterisation and calibration of hydrologic models if the spatial distribution of rainfall is not well known (Troutman, 1983; Lopes, 1996; Chaubey et al., 1999; Andreassian et al., 2001). As a consequence, studies were performed to investigate configurations of rainfall monitoring networks (Faures et al., 1995) and rainfall errors and uncertainties for hydrologic modelling (McMillan et al., 2011; Renard et al., 2011).
In general, rainfall monitoring networks based on point observations on the ground (station data) require interpolation methods to obtain spatio-temporal rainfall fields usable for distributed hydrologic modelling. Traditional interpolation methods fail in reproducing the true spatio-temporal rainfall pattern, especially for (i) data-scarce regions with poorly gauged catchments and low network density; (ii) highly dynamic, small-scale rainstorms which are not well recorded by existing monitoring networks; and (iii) catchments which are partly covered by rainfall. Consequently, uncertainties are associated with poorly identified spatio-temporal rainfall patterns in distributed rainfall–runoff modelling, since the amount of rainfall received by a catchment as well as the dynamics of runoff generation processes are typically underestimated by current methods.
The effects of poorly estimated spatio-temporal rainfall fields are visible in particular for semiarid and arid regions, where rainstorms show a great variability in space and time and the density of ground-based monitoring networks is low compared to other regions (Pilgrim et al., 1988).
Based on an analysis of 36 events in a mountainous region of Oman, McIntyre et al. (2007) show a wide range of event-based runoff coefficients, which underlines that achieving reliable runoff predictions by using hydrologic models in those regions is extremely challenging. This is supported by several simulation studies (Al-Qurashi et al., 2008; Bahat et al., 2009), which address the uncertainties in model parameterisation due to uncertain rainfall input. In this context Gunkel and Lange (2012) report that reliable model parameter estimation was only possible by using rainfall radar. However, this information is not available everywhere.
To address the inherent uncertainties described above, stochastic rainfall generators are used intensively to create spatio-temporal rainfall inputs for distributed hydrologic models to transform rainfall into runoff. A large amount of literature exists describing different approaches for space-time simulation of rainfall fields, including multi-site temporal simulation frameworks (Wilks, 1998), approaches based on the theory of random fields (Bell, 1987; Pegram and Clothier, 2001), or approaches based on the theory of point processes and its generalisation, which includes the popular turning-band method (Mantoglou and Wilson, 1982). Enhancements were made in order to portray different rainstorm patterns and distinct properties of rainfall fields, like spatial covariance structure, space-time anomaly, and intermittency (see Leblois and Creutin, 2013; Paschalis et al., 2013; Peleg et al., 2017).
Applications of spatio-temporal rainfall simulations together with hydrologic models are typically straightforward Monte Carlo approaches, where a large number of potential rainfall fields are generated, driven by stochastic properties of observed rainstorms or longer time series. These fields are used as inputs for distributed hydrologic model simulations to investigate the impact of certain aspects of rainfall, like uncertainty in measured rain depth, spatial variability, etc., on simulated catchment responses. Rainfall simulation applications are performed in unconditional mode (reproducing rain field statistics only) or conditional mode, where observations (e.g. from rain gauges) are reproduced too. The latter are commonly used for investigating the effect of spatial variability using fixed total precipitation and variations in spatial patterns (Krajewski et al., 1991; Shah et al., 1996; Casper et al., 2009; Paschalis et al., 2014). However, stochastic rainfall simulations in combination with distributed hydrologic modelling can be computationally demanding and can fail at matching the observed streamflow if rainfall fields are conditioned on rainfall point observations only.
On the other hand, inverse hydrologic modelling approaches have been developed to estimate rainfall time series based on observed streamflow data. Those approaches require either an inversion of the underlying mathematical equations for the non-linear transfer function (Kirchner, 2009; Kretzschmar et al., 2014) or an application of the hydrologic model in a Bayesian inference scheme (Kavetski et al., 2006; Del Giudice et al., 2016). Up to now, both approaches deliver time series of catchment-averaged rainfall only, which gives no idea about the spatial extent and distribution of rainfall. This is particularly important when considering events such as localised rainstorms, which might be underestimated and not accurately portrayed.
The goal here is an event-based reconstruction of spatio-temporal rainfall patterns which best explain measured point rainfall data and catchment runoff response. For that we looked for potential candidates for rainfall fields at sub-daily time steps and a spatial resolution of 1 km², which, to our knowledge, has not been done so far. To achieve this task, we combined stochastic rainfall simulations and distributed hydrologic modelling in an inverse modelling approach, where spatio-temporal rainfall patterns are conditioned on rainfall point observations and observed runoff. The methodology of the inverse hydrologic modelling approach consists of the stochastic random field simulator Random Mixing and a distributed rainfall-runoff model in a Monte Carlo framework. Until now, Random Mixing, developed by Bárdossy and Hörning (2016b) for solving inverse groundwater modelling problems, has been used by Haese et al. (2017) for reconstruction and interpolation of precipitation fields using different data sources for rainfall.
After this introduction the methods are described in Sect. 2. It gives an overview of the methodology and further details on the applied rainfall-runoff model, the Random Mixing approach, and its application to rainfall fields. Section 3 aims to test the methodology. A synthetic test site is introduced which is used to demonstrate and discuss (i) the limits of common hydrologic modelling approaches (using rainfall interpolation) and (ii) the shortcomings of rainfall simulations which are not conditioned on the observed runoff. In contrast, the functionality of the inverse hydrologic modelling approach is illustrated and discussed. In Sect. 4, the inverse hydrologic modelling approach is applied to real-world data by the example of an arid mountainous catchment in Oman. The test site is introduced and results are shown and discussed. Finally, summary and conclusions are given in Sect. 5.
General approach
The methodology described here can be characterized as an inverse hydrologic modelling approach. It aims to infer potential candidates for the unknown spatio-temporal rainfall patterns from runoff observations at the catchment outlet, known parameterisation of the rainfall-runoff model, and rain gauge observations. The approach combines a grid-based spatially distributed rainfall-runoff model and a conditional random field simulation technique called Random Mixing (Bárdossy and Hörning, 2016a, b). Random Mixing is used to simulate a conditional rainfall field which honours the observed rainfall values as well as their spatial and temporal variability. Afterwards, an optimisation is performed to additionally condition the rainfall field on the observed runoff. For this, the initial field is used as input to the rainfall-runoff model. The deviation between the simulated runoff and the observed runoff is evaluated based on the model efficiency (NSE) defined by Nash and Sutcliffe (1970). To minimise this deviation the rainfall field is mixed with another random field which exhibits certain properties such that the mixture honours the observed rainfall values and their spatio-temporal variability. This procedure is repeated until a satisfying solution, i.e. a conditional rainfall field that achieves a reasonable NSE, is found. To enable a reasonable uncertainty estimation the procedure is repeated until a predefined number of potential candidates has been found. In the following, rainfall is used interchangeably with precipitation.
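As a minimal illustration of the acceptance criterion used in this loop, the Nash–Sutcliffe efficiency can be sketched as follows (a hedged sketch; the function and variable names are ours, not from the paper):

```python
import numpy as np

def nse(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; values <= 0 mean the model is no better than
    simply predicting the mean of the observations."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# A perfect simulation gives NSE = 1; predicting the mean gives NSE = 0.
q_obs = np.array([0.5, 2.0, 6.0, 3.0, 1.0])
print(nse(q_obs, q_obs))                     # 1.0
print(nse(np.full(5, q_obs.mean()), q_obs))  # 0.0
```

Because NSE is a monotone function of the sum of squared errors, minimising 1 − NSE is equivalent to minimising the squared deviation between simulated and observed runoff.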
Rainfall-runoff model
A simple spatially distributed rainfall-runoff model is used as transfer function to portray the non-linear transformation of spatially distributed rainfall into runoff at catchment outlets. The model is dedicated to describing rainfall-runoff processes in arid mountainous regions, which are mostly based on infiltration excess and Hortonian overland flow. The model operates on regular grid cells in event-based mode. It is parsimonious in the number of parameters, considers transmission losses, but has no base flow component. Pre-state information at the beginning of an event is neglected since runoff processes mostly start under dry conditions (Pilgrim et al., 1988).
More specifically, only simple approaches known from hydrologic textbooks for the simulation of single rainfall-runoff events (no long-term water balance) are used (Dyck and Peschke, 1983). Effective precipitation Pe(x, t) with location x ∈ D and time t ∈ T is calculated by an initial and constant rate loss model applied on each grid cell which is affected by rainfall. The initial loss I_a(x) represents interception and depression storage. If the accumulated precipitation exceeds I_a(x), surface runoff may occur, which is reduced by the constant rate f_c(x) throughout an event to consider infiltration. The calculated effective precipitation (or surface runoff) is transferred to the next river channel section considering translation and attenuation processes. Translation is accounted for with a grid-based travel-time function to include the effects of surface slope and roughness. Attenuation is accounted for with a single linear storage unit with recession constant f_r(x). Both approaches are applied on grid cells affected by effective precipitation only, to fully support spatially distributed calculations corresponding to the spatial extent of the rain field. The properties of several landscape units are addressed by different parameter sets (for I_a(x), f_c(x), f_r(x)) following the concept of hydrogeological response units (Gerner, 2013), since hydrologic processes are mostly driven by hydrogeology in these regions. Runoff is routed to the catchment outlet by a simple lag model in combination with a constant rate (f_t) loss model to portray transmission losses along the stream channel. The RR model is applied on an hourly time step on regular grid cells of 1 km by 1 km. Parameters are assumed to be known and fixed during the inverse modelling procedure. The RR model is linked to Random Mixing directly and named with the working title NAMarid.
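The initial-and-constant-rate loss model described above can be sketched for a single grid cell as follows (a simplified illustration under our own naming; the actual NAMarid parameterisation and loss accounting may differ in detail):

```python
import numpy as np

def effective_precip(p, ia, fc):
    """Initial-loss / constant-rate loss model for one grid cell.
    p  : rainfall series (mm/h), ia : initial loss (mm),
    fc : constant infiltration rate (mm/h).
    Rainfall first fills the initial-loss store (interception and
    depression storage); each time step additionally loses up to fc
    to infiltration before producing effective precipitation."""
    pe = np.zeros_like(p, dtype=float)
    store = ia
    for t, pt in enumerate(p):
        absorbed = min(pt, store)              # fill the initial-loss store
        store -= absorbed
        pe[t] = max(pt - absorbed - fc, 0.0)   # remainder minus infiltration
    return pe

p = np.array([2.0, 5.0, 10.0, 4.0, 0.0])       # mm/h
print(effective_precip(p, ia=4.0, fc=2.0))     # -> [0. 1. 8. 2. 0.]
```

The translation, attenuation, and channel-lag components would then operate on this effective precipitation per cell.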
Random Mixing for inverse hydrologic modelling
Random Mixing is a geostatistical simulation approach. It uses copulas as spatial random functions (Bárdossy, 2006) and represents an extension to the gradual deformation approach (Hu, 2000). In the following a brief description of the Random Mixing algorithm is presented. A detailed explanation can be found in Hörning (2016).
The goal of the inverse hydrologic modelling approach presented herein is to find a conditional precipitation field P(x, t) with location x ∈ D and time t ∈ T which reproduces the observed spatial and temporal variability and marginal distribution of P. This field should also honour precipitation observations at locations x_j and times t_i:

P(x_j, t_i) = p_{j,i} for j = 1, ..., J and i = 1, ..., I. (1)

Note that P denotes a spatial field and p denotes a precipitation value within that field. Furthermore, the solution of a rainfall-runoff model using the field P as input variable should approximately honour the observed runoff:

Q_t(P) ≈ q_t for t = 1, ..., T, (2)

where Q_t denotes the rainfall-runoff model and q_t represents the observed runoff values at time step t. Note that Q_t(P) represents a non-linear function of the field P.
In order to find such a precipitation field P which fulfills the conditions given in Eqs. (1) and (2), Random Mixing can be applied. Figure 1 shows a flowchart of the corresponding procedure.
Using the given observations p_{j,i}, a marginal distribution G(p) has to be fitted to them. Note that in general any type of distribution function (e.g. parametric, non-parametric, and combinations of distributions) can be used. For the applications presented herein the selected marginal distribution consists of two parts: the discrete probability of zero precipitation and an exponential distribution for the wet precipitation observations. It is defined as follows:

G(p) = p_0 + (1 - p_0)(1 - exp(-λp)) for p ≥ 0, (3)

with p denoting precipitation values, p_0 the discrete probability of zero precipitation, and λ the parameter of the exponential distribution. Thus the parameters that need to be estimated are p_0 and λ. Then, using the fitted marginal distribution, the observed precipitation values are transformed to standard normal:

w_{j,i} = Φ^{-1}(G(p_{j,i})), (4)

where Φ^{-1} denotes the univariate inverse standard normal distribution. Note that zero precipitation observations are not transformed to the same value, but are considered as inequality constraints as described in Hörning (2016). Further note that the transformation of the marginal distribution described in Eq. (4) can be reversed via the following:

P = G^{-1}(Φ(W)), (5)

where G^{-1} denotes the inverse marginal distribution of P and Φ denotes the univariate standard normal distribution. Also, note that W denotes the transformed spatial field while w denotes a transformed observed value within that field. Note that in this approach we assume that the precipitation distribution is the same for each location x and each time step t. One could use a location- and/or time-specific distribution to take spatial or temporal non-stationarity into account; however, this requires a relatively large amount of precipitation observations and/or additional information.
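The mixed marginal distribution and the normal-score transform described above can be sketched in code as follows (a minimal illustration with our own naming; fitting p_0 by the relative dry frequency and λ by the wet-mean reciprocal is our simplifying assumption, and zero observations are simply skipped here rather than handled as inequality constraints):

```python
import numpy as np
from statistics import NormalDist

def fit_marginal(p_obs):
    """Fit the mixed marginal: an atom at zero with probability p0 plus
    an exponential distribution (rate lam) for the wet observations."""
    p_obs = np.asarray(p_obs, dtype=float)
    wet = p_obs[p_obs > 0]
    p0 = 1.0 - wet.size / p_obs.size      # share of dry observations
    lam = 1.0 / wet.mean()                # ML estimate of the exponential rate
    return p0, lam

def to_normal(p_wet, p0, lam):
    """Normal-score transform w = Phi^{-1}(G(p)) for wet values p > 0."""
    g = p0 + (1.0 - p0) * (1.0 - np.exp(-lam * np.asarray(p_wet, dtype=float)))
    return np.array([NormalDist().inv_cdf(v) for v in g])

obs = np.array([0.0, 0.0, 1.5, 3.0, 0.5, 2.0])   # mm/h, two dry readings
p0, lam = fit_marginal(obs)                      # p0 = 1/3, lam = 1/1.75
w = to_normal(obs[obs > 0], p0, lam)             # standard-normal scores
```

The inverse transform of Eq. (5) would apply `NormalDist().cdf` followed by the inverse of G to map a simulated standard normal field back to precipitation.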
As a next step we assume that the field W is normal, and thus its spatio-temporal dependence is described by the normal copula with correlation matrix c. In general copulas are multivariate distribution functions defined on the unit hypercube with uniform univariate marginals. They are used to describe the dependence between random variables independently of their marginal distributions. The normal copula can be derived from a multivariate standard normal distribution (see Bárdossy and Hörning, 2016b, for details). It enables modelling of a Gaussian spatio-temporal dependence structure with arbitrary marginal distribution. Note that its correlation matrix c has to be assessed from the available observations. If no zero observations are present, the maximum likelihood estimation procedure described in Li (2010) can be applied to estimate the copula parameters. If zero values are present, a modified maximum likelihood approach has to be used (Bárdossy, 2011). It uses a combination of three different cases (wet-wet pairs, wet-dry pairs, dry-dry pairs of observations) for the estimation of the copula parameters.
As a next step, unconditional standard normal random fields V_l with l = 1, ..., L are simulated such that they all share the same spatio-temporal dependence structure, which is described by c of the fitted normal copula. Such fields can for example be simulated using fast Fourier transformation for regular grids (Wood and Chan, 1994; Wood, 1995; Ravalec et al., 2000) or turning-band simulation (Journel, 1974). Here we used the spectral representation method introduced by Shinozuka and Deodatis (1991, 1996). Using the fields V_l, the system of linear equations

Σ_{l=1}^{L} α_l V_l(x_j, t_i) = w_{j,i} (6)

is set up. Note that α_l denotes the weights of the linear combination, w_{j,i} = Φ^{-1}(G(p_{j,i})) the transformed precipitation values, and V_l(x_j, t_i) the values of the random fields at the observation locations. Using singular value decomposition (SVD) (Golub and Kahan, 1965) to solve this equation system leads to a minimum L2 norm solution. In order to obtain a smooth, low-variance field, an L2 norm Σ_l α_l² ≪ 1 is required. If no such solution is found, an additional field V_{L+1} is created, added to the system of linear equations, and the system is solved again. Note that with increasing degrees of freedom (i.e. more fields) the L2 norm of the solution decreases.
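The SVD-based minimum-norm solution of this linear system, including the growing of the basis until the norm condition is met, can be sketched as follows (a toy stand-in: random vectors replace the simulated fields V_l evaluated at the observation points, and the norm threshold 1.0 is our placeholder):

```python
import numpy as np

rng = np.random.default_rng(7)
n_obs = 5
w = rng.normal(size=n_obs)            # transformed point observations w_{j,i}

# Grow the set of unconditional fields V_l until the minimum-norm
# solution both honours the observations and has a small L2 norm.
L_fields = n_obs                      # start with as many fields as observations
while True:
    V = rng.normal(size=(n_obs, L_fields))   # stand-in for V_l at obs points
    # lstsq uses SVD internally and returns the minimum-L2-norm solution
    # of an underdetermined system.
    alpha, *_ = np.linalg.lstsq(V, w, rcond=None)
    if np.allclose(V @ alpha, w) and np.sum(alpha**2) < 1.0:
        break
    L_fields += 1                     # more degrees of freedom -> smaller norm

print(L_fields, float(np.sum(alpha**2)))
```

In the real method each column of `V` would be a simulated random field restricted to the observation locations, and the resulting weights define the conditional field on the whole grid.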
Once a solution with an acceptable L2 norm, i.e. Σ_l α_l² ≪ 1, is found, the resulting field is defined as follows:

W* = Σ_{l=1}^{L+M} α_l V_l, (7)

where M denotes the number of additional fields added to the equation system. Note that W* fulfills the conditions defined in Eq. (1); however, it does not fulfill Eq. (2) and it does not represent the correct spatio-temporal dependence structure. The next step is to simulate fields U_k with k = 1, ..., K which fulfill the homogeneous conditions, i.e. U_k(x_j, t_i) = 0. Further, all fields U_k need to share the same spatio-temporal dependence structure, again described by c. Such fields can be generated in a similar way to W* (see Hörning, 2016 for details). The advantage of these fields U_k is that they form a vector space (they are closed under multiplication and addition); thus

W_λ = k(λ) W* + Σ_{k=1}^{K} λ_k U_k, (8)

where λ_k denotes arbitrary weights and k(λ) denotes a scaling factor, results in a field W_λ which also fulfills the conditions prescribed in Eq. (1). The scaling factor is defined such that W_λ exhibits the correct spatio-temporal dependence structure (see Hörning, 2016, for its explicit form). Thus, transforming W_λ back to P using Eq. (5) will result in a precipitation field which has the correct spatio-temporal dependence structure and marginal distribution, and honours the precipitation observations.
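The variance-preserving role of the scaling factor k(λ) can be checked numerically in a simplified setting (our own toy assumptions: W* and the U_k are independent unit-variance components, and k(λ) = sqrt(1 − Σλ_k²); the actual definition used by the method is given in Hörning, 2016):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
w_star = rng.normal(size=n)          # stand-in conditional field (unit variance)
U = rng.normal(size=(3, n))          # stand-in homogeneous fields
lam = np.array([0.3, -0.4, 0.2])     # arbitrary mixing weights, sum(lam^2) < 1

k = np.sqrt(1.0 - np.sum(lam**2))    # scaling factor under the toy assumption
w_lam = k * w_star + lam @ U         # the mixture of Eq.-(8) type

print(w_lam.var())                   # close to 1: total variance is preserved
```

Because k² + Σλ_k² = 1 and the components are independent, the mixture keeps unit variance for any admissible λ, which is what allows the weights to be optimised freely without destroying the field's statistics. In the real method the U_k additionally vanish at the observation points, so the mixture keeps honouring the point data.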
To also honour the observed runoff defined in Eq. (2), an optimisation problem can be formulated:

min_λ Σ_{t=1}^{T} [Q_t(G^{-1}(Φ(W_λ))) - q_t]², (10)

which minimizes the difference between the modelled and observed runoff by optimising the weights λ_k. As these weights are arbitrary, they can be changed without violating any of the already fulfilled conditions; thus they can be optimized without any further constraints. If for a given set of fields and weights and after a certain number of iterations N no suitable solution is found, the number K of fields U_k can be increased and the optimisation is repeated. A suitable solution is found when the deviation between simulated and observed runoff is smaller than the criterion of acceptance ε (here, 1 − NSE is used). If a suitable solution is found, the whole procedure can be restarted using new random fields V_l. Thus multiple solutions can be obtained, enabling uncertainty quantification of spatio-temporal rainfall fields.
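The optimisation of the weights λ_k described above can be sketched with a toy setup (everything here is a stand-in of our own: a linear map `A` replaces the non-linear RR model, and a simple random search replaces whatever optimiser is actually used):

```python
import numpy as np

rng = np.random.default_rng(1)

n, K = 50, 4                               # field size, number of U_k fields
A = rng.normal(size=(8, n))                # toy linear "rainfall-runoff model"
w_star = rng.normal(size=n)                # stand-in conditional field
U = rng.normal(size=(K, n))                # stand-in homogeneous fields
q = A @ rng.normal(size=n)                 # synthetic "observed" runoff

def objective(lam):
    """Squared deviation between modelled and observed runoff for weights lam."""
    k = np.sqrt(max(1.0 - np.sum(lam**2), 0.0))
    p = k * w_star + U.T @ lam             # the mixed field
    return np.sum((A @ p - q) ** 2)

# Gradient-free random search over the weights lam.
best_lam = np.zeros(K)
best = objective(best_lam)
for _ in range(2000):
    cand = best_lam + 0.05 * rng.normal(size=K)
    if np.sum(cand**2) < 1.0 and objective(cand) < best:
        best_lam, best = cand, objective(cand)
```

If no acceptable λ is found, the real method enlarges K (more homogeneous fields, hence more degrees of freedom) and repeats the optimisation.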
3 Test of the methodology
Synthetic test site
To test the ability of the methodology a synthetic example was designed. The example consists of a synthetic catchment partly covered by rainfall. The synthetic catchment has a size of 211 km² with elevations ranging between 100 and 1100 m a.s.l. and homogeneous landscape properties (Fig. 2).
A synthetic rainfall event of 6 h duration with an hourly time step and a maximum spatial extension of 118 km² on a regular grid of 1 km by 1 km cell size is used. Rainfall amounts above 20 mm event⁻¹ cover an area of 25 km² with maximum rainfall of 36 mm event⁻¹ and a maximum intensity of 12 mm h⁻¹ (see Figs. 3 and S1 in the Supplement). Based on this known spatio-temporal rainfall input pattern and RR model parameterisation, the catchment response at the surface outlet was simulated and designated as the known "observed" runoff q_t (see Fig. 6, blue graph). Furthermore, 10 different cells were selected from the spatio-temporal rainfall patterns to represent virtual monitoring stations of rainfall. They were chosen in a way that the centre of the event is not recorded. They are designated as the known "observed" rainfall P(x_j, t_i) at J monitoring stations for T time steps and provide the data basis for interpolation, conditional simulation, and inverse modelling of spatio-temporal rainfall patterns. Figure 4 shows their course in time. Note that virtual monitoring stations 2, 5, 9, and 10 measure 0 mm h⁻¹ rainfall only. Based on these observations, the fitted parameters for the marginal distribution (Eq. 3) are p_0 = 0.36 and λ = 0.48. The fitted copula for the dependency structure in space and time is a Gaussian copula with an exponential correlation function with a range of 2.5 km in space and a range of 1.5 h in time. In comparison, using the full synthetic dataset, a range of 4.5 km in space and a range of 2.5 h in time are estimated.
Common hydrologic modelling approach
At first, hourly rainfall data from the virtual monitoring stations were used to interpolate the spatio-temporal rainfall patterns on a regular grid of 1 km by 1 km cell size by using the inverse distance method, which is quite common in hydrologic modelling. Afterwards, the response of the synthetic catchment was calculated by the RR model. Figure 5 shows the interpolated pattern of the event-based rainfall amounts as the sum over single time steps. The pattern looks quite smooth and has only minor similarities with the true pattern in Fig. 3. The maximum rainfall amount per event is equal to the maximum of the observation at virtual station number 8 with 16.2 mm event⁻¹. Therefore, the extension of a rainfall centre over 20 mm event⁻¹ cannot be estimated. Due to low rainfall intensities, the simulated response of the RR model shows a significant underestimation of the observed runoff, with an NSE value of −0.28 (see Fig. 6, green graph).
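The inverse distance method used here can be sketched as follows (a minimal IDW implementation with our own naming; it also makes the limitation just described explicit, since an IDW estimate can never exceed the maximum observed value):

```python
import numpy as np

def idw(xy_obs, values, xy_grid, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of point observations onto
    grid points. A grid point coinciding with a station (distance ~0)
    receives that station's value via the eps-regularised weights."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ values) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0]])   # station coordinates (km)
rain = np.array([0.0, 16.2])                     # observed mm per event
grid = np.array([[5.0, 0.0], [10.0, 0.0]])       # midpoint and a station cell
interp = idw(stations, rain, grid)
print(interp)                                    # midpoint -> 8.1, station -> 16.2
```

Because the result is a convex combination of the observations, any unrecorded rainfall centre between the stations is inevitably smoothed away, which is exactly the failure mode discussed above.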
Performance of conditional rainfall simulations
The Random Mixing approach was used to simulate 200 different spatio-temporal rainfall patterns conditioned on the virtual rainfall monitoring stations only. Resulting runoff simulations are displayed in Fig. 6. They show a wide range of hydrographs with peak values between 0.19 and 4.17 m³ s⁻¹. Compared to the runoff observation, the timing of the peaks is acceptable, but the peak values are underestimated. Only four hydrographs have NSE values higher than 0.7. The corresponding spatial event-based rainfall amounts for the top three runoff simulations regarding the NSE values (a: 0.89, b: 0.78, c: 0.73) are shown in Fig. 7. Their rainfall amounts range between 27.8 and 28.7 mm event⁻¹, with a spatial extent of 9 to 11 km² of rainfall above 20 mm event⁻¹ and a maximum intensity of 10.5 to 15.1 mm h⁻¹. Compared to the observation (Fig. 3), the spatial patterns look similar, at least regarding the spatial location of the event, and cover the maximum intensity. But the rainfall amounts per event as well as their spatial extent are too low. As a consequence, none of the simulated spatio-temporal rainfall fields conditioned on the virtual rainfall monitoring stations only is able to match the observed peak value in the resulting runoff.
Inverse hydrologic modelling approach
The inverse modelling approach was used to simulate 107 different spatio-temporal rainfall patterns which are conditioned on the virtual rainfall and runoff monitoring stations and which achieve runoff simulation results with NSE values better than 0.7. Afterwards a refinement was carried out by selecting only those simulations with nearly identical runoff simulation results compared to the observations. These simulations are characterized by NSE values larger than 0.995. Figure 8 shows the performance of the 20 selected realisations by grey graphs, which show only minor deviations in the flood peak range compared to the observation (blue graph). Associated rainfall patterns are displayed in Fig. 9 for six selected realisations by their spatial rainfall amounts per event. Compared to the true spatial pattern (see Fig. 3), none of them reproduce the true pattern exactly, but all of them locate the centre of the event in the same region as the true pattern. This shows that by additional conditioning of spatio-temporal rainfall patterns on the runoff observation and consideration of the catchment's drainage characteristics represented by the RR model, the rainfall event can be localised and reconstructed in its spatial extent as well as in its course in time (see also Fig. S1). Most probably, if we sampled a large number of rainfall fields conditioned on the rainfall observations only, we would find a realisation which matches the runoff observation too. Due to the additional conditioning on runoff we find these realisations faster. However, the inference of a three-dimensional input variable by using an integral output response results in a set or ensemble of different solutions. Rainfall amounts of the selected 20 realisations above 20 mm event⁻¹ cover an area of 13 to 25 km² with maximum rainfall of 26.7 to 40.4 mm event⁻¹ and maximum intensities of 10.7 to 17.1 mm h⁻¹. The event-based areal precipitation of the catchment ranges between 98.2 % and 114.7 % of the observation (see Fig. 3).
Figure 9 presents spatial rainfall amounts per event for (a) the realisation with the smallest area above 20 mm event⁻¹ and smallest intensity, (b) the realisation with the largest area above 20 mm event⁻¹, (c) the realisation with the highest intensity and rainfall amount per event, (d) the realisation with the best NSE value in the resulting runoff, and (e)–(f) realisations with event statistics similar to the true spatio-temporal rainfall pattern. Compared to the observed pattern (see Fig. 3), the different realisations match the spatial location as well as the shape of the observed pattern very well. However, the spatial patterns of the realisations are not as smooth and symmetric as the constructed synthetic observation. Furthermore, the realisations show some scattered low rainfall amounts, which are not of importance for the hydrograph simulation, since they are addressed by the initial and constant rate losses of the RR model.
If an average rainfall pattern is derived by calculating the mean value per grid cell over all realisations of the ensemble for each time step, a smoother pattern is obtained, which looks more similar to the true one but has smaller rainfall intensities. Using this mean ensemble pattern for calculating the runoff response leads to an underestimation of the observed hydrograph, as shown by the black hydrograph in Fig. 8. Therefore, the ensemble mean of the hydrographs (red line in Fig. 8) is a better representative of the sample than the mean ensemble rainfall pattern.
In addition, the data of the virtual monitoring stations (the observation) are always reproduced and are equal for each rainfall simulation. This means that each realisation reproduces the point observations of rainfall without any uncertainty. Only the grid points between the observations differ within the three-dimensional rainfall field and contain the stochasticity given by rainfall simulations conditioned on the observed values. In this context, the ensemble can be used as a partial descriptor of the total uncertainty. It describes the remaining uncertainty of precipitation if all available data are exploited, under the assumption of error-free measurements, reliable statistical rainfall models, and known hydrologic model parameters.
4 Application for real-world data

4.1 Arid catchment test site

The real-world example is taken from the upper Wadi Bani Kharus in the northern part of the Sultanate of Oman. It is the starting point for the present study and part of our multi-year research on hydrologic processes in this region. The headwater under consideration is the catchment of the streamflow gauging station of Al Awabi, with an area of 257 km², located in the Hadjar mountain range with heights ranging from 600 m a.s.l. to more than 2500 m a.s.l. The geology of the area is dominated by the Hadjar group, which consists of limestone and dolomite. The steep terrain consists mainly of rocks. Soils are negligible. However, larger units of alluvial depositions in the valleys are important for hydrologic processes, an issue which is addressed through spatial differences in RR model parameters. Vegetation is sparse and mostly cultivated in mountain oases. Annual rainfall can reach more than 300 mm year⁻¹, showing a huge variability between consecutive years. Analysis of measured runoff data over a period of 24 years shows that runoff occurred on average on only 18 days year⁻¹. Figure 10 displays the available monitoring network for sub-daily data. Runoff is measured in 5 to 10 min temporal resolution. Rainfall measurements vary from 1 min to 1 h. Therefore, a temporal resolution of 1 h was chosen for the event under investigation in this study. Figure 11 shows the measurements of the rainfall gauging stations and their altitudes for the rainstorm from 12 February 1999. Most of the rain was recorded at stations with lower altitudes located in the north-western and south-eastern part of the catchment. Rainfall interpolation was performed by the inverse distance method, since no dependency of rainfall on altitude was identifiable for this single heavy rainfall event. Parameters for the inverse modelling approach are p0 = 0.17 and λ = 0.14 for the marginal distribution (Eq. 3). The fitted copula for the
dependency structure in space and time is a Gaussian copula with an exponential correlation function with a range of 10 km in space and a range of 1 h in time.
Results and discussion
The real-world data example was performed for the runoff event from 12 February 1999 with an effective rainfall duration of 3 h. The simulated runoff for the interpolated rainfall pattern shows an underestimation of the peak discharge as well as a time shift of the peak arrival time compared to the observation (Fig. 12). Applying the inverse approach by conditioning spatio-temporal rainfall patterns on rainfall and runoff observations, an ensemble of 58 different hydrographs is obtained after refinement, with NSE values larger than 0.9. As shown in Fig. 12, all of these hydrographs (grey graphs) represent the observation well and overcome the time shift. To explain this behaviour, differential maps are calculated which show the difference between the simulated and the interpolated rainfall pattern for each time step (Fig. 13; see also Fig. S2 for a comparison of event-based spatial rainfall amounts). It is easy to see that the inverse approach allows for a shift of the centre of the rainfall event from time step 1 to time step 2 and towards the catchment outlet. This results in a faster response of the catchment by its runoff compared to the interpolated rainfall pattern. In general, the obtained ensemble of spatio-temporal rainfall patterns is able to explain the observed runoff without discrepancy in the rainfall measurements. Similar to the synthetic example, the ensemble mean hydrograph (Fig. 12, red graph) is a better representative of the sample than the hydrograph based on the mean ensemble spatio-temporal rainfall pattern (black graph).
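The differential maps and the shift of the event centre described above can be illustrated with a minimal NumPy sketch. Function names are hypothetical, and rainfall fields are assumed to be arrays of shape (time, y, x):

```python
import numpy as np

def differential_maps(simulated, interpolated):
    """Per-time-step difference (simulation - interpolation) of two
    rainfall fields defined on the same space-time grid."""
    simulated = np.asarray(simulated, dtype=float)
    interpolated = np.asarray(interpolated, dtype=float)
    if simulated.shape != interpolated.shape:
        raise ValueError("rainfall fields must share one space-time grid")
    return simulated - interpolated

def rain_centroid(field):
    """Rainfall-weighted centre of mass (row, col) per time step, a simple
    way to track how the event centre moves between time steps."""
    field = np.asarray(field, dtype=float)
    t, ny, nx = field.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    w = field.reshape(t, -1)
    total = w.sum(axis=1)
    cy = (w * yy.ravel()).sum(axis=1) / total
    cx = (w * xx.ravel()).sum(axis=1) / total
    return np.stack([cy, cx], axis=1)
```

Comparing `rain_centroid` of a realisation between consecutive time steps makes a shift of the event centre towards the outlet, as seen in Fig. 13, directly quantifiable.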
Summary and conclusions
An inverse hydrologic modelling approach for simulating spatio-temporal rainfall patterns is presented in this paper. The approach combines the conditional random field simulator Random Mixing and a spatially distributed RR model in a joint Monte Carlo framework. It allows for obtaining reasonable spatio-temporal rainfall patterns conditioned on point rainfall and runoff observations. This has been demonstrated by a synthetic data example as well as a real-world data example for single rainstorms and catchments which are only partly covered by rainfall.
The proposed framework was compared to the methods of rainfall interpolation and conditional rainfall simulation. Reconstruction of event-based spatio-temporal rainfall patterns has been feasible with the inverse approach, if the runoff observation and the catchment's spatial drainage characteristic, represented by the RR model with spatially distributed travel times of overland flow, are considered. As shown by the synthetic example, the rainfall pattern obtained by interpolation did not match the observed rainfall field and runoff. If rain gauge observations do not portray the rain field adequately, a "good" interpolation result in the least-squares sense is not a solution of the problem. This is the case in particular for small-scale rainstorms with high spatio-temporal rainfall variability and/or rainfall data scarcity due to insufficient monitoring network density. By rainfall simulations conditioned on rain gauge observations only, reasonable spatio-temporal rainfall fields are obtained, but with a wide spread in the resulting runoff hydrographs. A large number of simulated rainfall fields is required to find those realisations which match the observed runoff, since the number of possible conditioned rainfall fields is much higher than the number of rainfall fields matching both point observations and runoff. By applying optimisation, rainfall fields are conditioned on discharge too, and appropriate candidates for spatio-temporal rainfall patterns can be identified more reliably, faster, and with reduced uncertainty.
The inference of a three-dimensional input variable by using an integral output response results in a set of possible solutions in terms of spatio-temporal rainfall patterns. This ensemble is obtained by repetitive execution of the optimisation step within the Monte Carlo loop. It can be considered as a descriptor of the partial uncertainty resulting from spatio-temporal rainfall pattern estimates (under the assumption of error-free measurements, reliable statistical rainfall models, and known hydrologic model parameters). Realisations of the ensemble vary in rainfall amounts, intensities, and the spatial extent of the event, but they reproduce the point rainfall observations exactly and yield similar runoff hydrographs. This allows for deeper insights into hydrologic model and catchment behaviour and gives valuable information for the reanalysis of rainfall-runoff events, since rainstorm configurations leading to similar flood responses become visible. As shown in the example, operating with an ensemble mean is less successful in matching the runoff observation than an application of the whole ensemble, due to smoothing effects.
The approach is also applicable in data-scarce situations, as demonstrated by the real-world data example. Here, the flexibility of the approach becomes visible, since the simulated rainfall patterns also allow for overcoming a shift in the timing of runoff. Therefore, the approach can be considered as a reanalysis tool for rainfall-runoff events, especially in regions where runoff generation and formation are based on surface flow processes (Hortonian runoff) and in catchments with wide ranges in arrival times at the catchment outlet, such as mountainous regions or regions with distinct drainage structures, e.g. urban and peri-urban regions.
Nevertheless, further research and investigations are required. The examples presented in this paper are based on an hourly time resolution and a 1 km² grid size in space. In particular, for rainstorms in small fast-responding catchments, finer resolutions in space and time are required. Here the limits of the approach in terms of the number of time steps and grid cells need to be explored. Another point is the required amount and quality of observation data, as well as statistical model selection, to obtain space-time rain fields. Both impact the simulation of rainfall amounts and of patterns through the derived spatial and temporal dependence structure. In these examples Gaussian copulas are used, which might not be a good estimator for the spatial dependency in every case of heavy rainfall.
The proposed framework is a first step that only aims at reconstructing spatio-temporal rainfall patterns under the assumption of a fixed hydrologic model structure and parameters. Certainly, hydrologic model uncertainty is of importance. But instead of changing the model to fit the observed discharge, we estimate rainfall fields which fit the model and the discharge by doing reverse hydrology. If plausible rainfall fields can be identified in this way, the corresponding model and rainfall fields are plausible. Thus, the framework can be applied to test hypotheses about hydrologic model selection or to explain extraordinary rainfall-runoff events by using a well-calibrated, spatially distributed hydrologic model for the catchment of interest. In this context, further research
Figure 1. Flowchart of the Random Mixing algorithm for inverse hydrologic modelling.

Figure 2. Topography, watershed, and observation network of the synthetic catchment.

Figure 3. Rainfall amounts of the synthetic rainfall event. Virtual monitoring stations are marked by crosses.

Figure 4. Time series of rainfall intensities at virtual monitoring stations.

Figure 5. Interpolated rainfall amounts per event by using data of virtual monitoring stations.

Figure 6. Runoff simulations based on simulated spatio-temporal rainfall patterns conditioned at rainfall point observations only (grey graphs) compared to its mean (red graph), runoff observation (blue graph), and simulation based on interpolated rainfall patterns (green graph).

Figure 7. Event-based rainfall patterns conditioned at rainfall point observations only for the top three runoff simulations in Fig. 6.

Figure 8. Comparison of hydrographs for the synthetic catchment shown by the observed runoff (blue) and rainfall-runoff simulation results based on interpolated rainfall patterns (green), a simulated ensemble of spatio-temporal rainfall patterns conditioned at rainfall and runoff observations (grey) and their mean value (red), and mean ensemble rainfall patterns (black).

Figure 9. Selected realisations of spatial rainfall amounts per event with similar performance in resulting runoff, obtained by the inverse modelling approach for simulating spatio-temporal rainfall patterns: (a) realisation with the smallest area above 20 mm event⁻¹ and smallest intensity, (b) realisation with the largest area above 20 mm event⁻¹, (c) realisation with the highest intensity and rainfall amount per event, (d) realisation with the best NSE value in resulting runoff, and (e)-(f) realisations with similar event statistics to the true spatio-temporal rainfall pattern.

Figure 10. Real-world case study: catchment of gauge Al Awabi and sub-daily monitoring network for runoff and rainfall.

Figure 11. Rainfall amounts and altitudes of rainfall gauging stations from 12 February 1999.

Figure 12. Comparison of hydrographs for the real-world catchment shown by the observed runoff (blue) and rainfall-runoff simulation results based on interpolated rainfall patterns (green), a simulated ensemble of spatio-temporal rainfall patterns conditioned at rainfall and runoff observations (grey) and their mean value (red), and mean ensemble rainfall patterns (black).

Figure 13. Differential maps of spatio-temporal rainfall patterns for three consecutive time steps (simulation − interpolation).
Introduction to Spherical Elementary Current Systems
This is a review of the Spherical Elementary Current System (SECS) method and its various applications to studying ionospheric current systems. In this chapter, the discussion is more general, and applications where ground-based and/or satellite observations are used as the input data are discussed. The application of the SECS method to analyzing electric and magnetic field data provided by the Swarm satellites will be discussed in more detail in the next chapter.
Introduction
At high magnetic latitudes, the ionospheric current system basically consists of horizontal currents flowing at around 100-150 km altitude, and almost vertical field-aligned currents (FAC) flowing along the geomagnetic field, thus connecting the ionospheric currents to the magnetosphere. The magnitude, spatial distribution, and temporal variations of the horizontal currents and FAC can be estimated from the magnetic field they produce. Over the years, several techniques have been developed for this task, as discussed in various chapters of this book (see also Vanhamäki and Juusola 2018, and references therein). The present chapter gives an overall introduction to the Spherical Elementary Current System (SECS) method, while Chap. 3 deals with the specific application of the SECS method to magnetic data provided by the Swarm satellite mission. (The original version of this chapter was revised: Electronic Supplementary Material has been added to this chapter. The correction to this chapter is available at https://doi.org/10.1007/978-3-030-26732-2_13.)
Mathematically speaking, the elementary systems form a set of basis functions for representing two-dimensional vector fields on a spherical surface. This can, of course, be done in other ways too, e.g., by using spherical harmonic or spherical cap harmonic functions. The main difference is that the elementary systems represent the vector field in terms of its divergence and curl, whereas harmonic functions are used to represent the scalar potential and stream function of the vector field. In principle, these methods should be equivalent, but in practice each has its strengths and weaknesses. As will be seen, the advantages of the SECS method include adjustable grid resolution, a variable shape of the analysis region, and no requirement for explicit boundary conditions.
The chapter begins with a summary of some basic electrodynamic properties of ionospheric current systems and the most commonly used approximations in Sect. 2.2. The 2D SECSs are introduced in Sect. 2.3. Their applications to the analysis of two-dimensional vector fields and magnetic fields are discussed in Sects. 2.4-2.7. A one-dimensional variant of the SECS method, applicable to studies of single-satellite magnetic measurements, is discussed in Sect. 2.9. Some practical issues when applying the SECS method are discussed in Sect. 2.10. Finally, a short overview of some of the studies where the SECS method has been used is given in Sect. 2.11.
An example MATLAB code demonstrating the use of SECS in the specific task of estimating ionospheric equivalent currents from ground magnetic measurements is included as supplementary material in the electronic version of the book, including data from the IMAGE (International Monitor for Auroral Geomagnetic Effects) magnetometer network.
Short Review of Ionospheric Electrodynamics
A short summary of the relevant properties of ionospheric electrodynamics, especially at high magnetic latitudes (i.e., the auroral oval), is given in this section. For a more comprehensive introduction see, for example, Richmond and Thayer (2000). In the context of this chapter, ionospheric electrodynamics is described by the electric field, and the Hall and Pedersen conductivities and currents. Additionally, the magnetic perturbation created by the ionospheric currents is an important quantity in many studies. Thus, the focus is on macroscopic electric parameters, while many interesting phenomena, such as various chemical processes and particle dynamics, are ignored.
In the commonly used thin-sheet approximation (see e.g., Untiedt and Baumjohann (1993)) the ionosphere is assumed to be a thin, two-dimensional spherical shell of radius R at a constant distance from the Earth's center. The thin-sheet approximation is justified by the fact that the horizontal currents flowing in the ionosphere are concentrated in a rather thin layer around 100-150 km altitude, where the Pedersen and Hall conductivities have their maxima. Thus the thickness of this layer is small compared to the horizontal length scale of typical ionospheric current systems. However, in some cases three-dimensional modeling is required (Amm et al. 2008).
Above the ionospheric current sheet there is perfectly conducting plasma, where the magnetic field lines are equipotentials, and below it is the nonconductive neutral atmosphere. The electric field is assumed to be roughly constant in altitude through the thin current layer. Thus the Pedersen and Hall conductivities can be height-integrated into Pedersen and Hall conductances, while the sheet current density J is obtained by similarly height-integrating the horizontal part j_h of the 3D current j.
In summary, the main electrodynamic variables are: horizontal sheet current density J, field-aligned current density j_∥, horizontal electric field E, magnetic field B, and height-integrated Hall and Pedersen conductances Σ_H and Σ_P. These variables are related through Maxwell's equations, Ohm's law, and current continuity (Eqs. 2.1-2.4). In the last equation, the FAC density j_∥ just above the ionospheric current sheet is obtained by integrating the continuity equation ∇ · j = 0 ⇔ ∂_z j_z = −∇_h · j_h through the current sheet. Equations (2.1)-(2.4) employ the frequently used assumption of a radial magnetic field, so that ê_∥ = B/|B| = −ê_r at the northern hemisphere. Due to the thin-sheet approximation, only the radial component is needed in Eq. (2.1). According to Untiedt and Baumjohann (1993) and Amm (1998), the effect of tilted field lines is negligible for inclination angles χ > 75°, which covers the auroral zone. At lower latitudes the inclination of the magnetic field could be taken into account by modifying the Hall and Pedersen conductances in Eq. (2.3) (see e.g., Brekke 1997, Chap. 7.12) and by calculating the FAC as j_∥ = ∇ · J / sin χ.
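As an illustration of the thin-sheet Ohm's law (Eq. 2.3) and the continuity-based FAC (Eq. 2.4), a minimal sketch on a local Cartesian grid might look as follows. This is an assumption-laden sketch, not the chapter's formulation: the sign of the Hall term follows one common northern-hemisphere convention with ê_∥ = −ê_r and may need flipping for other conventions, and all function names are hypothetical.

```python
import numpy as np

def sheet_current(ex, ey, sigma_p, sigma_h):
    """Height-integrated Ohm's law: Pedersen current along E plus a Hall
    current perpendicular to it (one common NH sign convention)."""
    jx = sigma_p * ex + sigma_h * ey
    jy = sigma_p * ey - sigma_h * ex
    return jx, jy

def fac_density(jx, jy, dx, dy):
    """Field-aligned current density from current continuity,
    j_par = div(J), evaluated with centred differences."""
    djx = np.gradient(jx, dx, axis=1)  # d(jx)/dx
    djy = np.gradient(jy, dy, axis=0)  # d(jy)/dy
    return djx + djy
```

A uniform electric field with uniform conductances gives a divergence-free sheet current and therefore zero FAC, while any horizontal gradient in J shows up directly as j_∥.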
In a thin-sheet ionosphere the electric field E and the horizontal current J are two-dimensional vector fields, each of which can be represented by two potentials (Eqs. 2.5 and 2.6). The function φ_E is the usual electrostatic potential and ψ_E is related to the rotational inductive part of the electric field (see e.g., Yoshikawa and Itonaga 1996; Sciffer et al. 2004). It is usually assumed that ∇ψ_E = 0, but this does not hold in some situations (e.g., Vanhamäki and Amm 2011, and references therein). The current potential φ_J is connected to the FAC through Eq. (2.4), while ψ_J represents a rotational current that is closed within the ionospheric current sheet. The latter part is also related to the so-called ionospheric equivalent current and the ground magnetic disturbance, as discussed in Sect. 2.7.
Elementary Current Systems
In Sect. 2.2, the electric field and current were described in terms of potentials. This kind of representation is very common in many fields of physics, and can be applied by expanding the potential in terms of some basis functions, such as Fourier series, spherical harmonics, or spherical cap harmonics (see, for example, Backus 1986, and Chap. 9 in this book).
However, the fields can equally well be represented in terms of their sources and rotations, that is, by their divergence and curl. This approach is used in the elementary system method. It is based on Helmholtz's theorem, which states that any well-behaved (e.g., continuously differentiable) vector field is uniquely composed of a sum of curl-free (CF) and divergence-free (DF) parts.
Elementary current systems, as applied to ionospheric current systems, were introduced by Amm (1997). Although for historical reasons the name refers to currents, they can be used to represent any two-dimensional vector field. Basically, they represent a localized curl or divergence of the vector field. Such elementary systems can be defined either in spherical or Cartesian geometry, and they are called SECS and CECS, respectively. In this chapter, the spherical variant is used.
In accordance with Helmholtz's theorem, there are two different types of elementary systems: one is DF and the other CF. The spherical elementary systems, shown in Fig. 2.1, are defined in such a way that the CF system has a Dirac δ-function divergence and the DF system a δ-function curl at its pole, with uniform and oppositely directed sources elsewhere. It is easy to show (Amm 1997) that the vector fields

V_CF(θ') = S_CF / (4πR) cot(θ'/2) ê_θ',   (2.7)
V_DF(θ') = S_DF / (4πR) cot(θ'/2) ê_φ'   (2.8)

have the desired properties: the divergence of V_CF is a δ-function of magnitude S_CF at the pole with a uniform, oppositely signed counterpart −S_CF/(4πR²) elsewhere (2.9), while ∇ × V_CF = 0 (2.10); correspondingly, ∇ · V_DF = 0 (2.11) and the radial curl of V_DF is a δ-function at the pole with a uniform counterpart elsewhere (2.12).

Fig. 2.1 Two-dimensional curl-free (CF) and divergence-free (DF) Spherical Elementary Current Systems (SECS). The CF SECS is shown with the associated radial FAC. Adapted from Amm and Viljanen (1999).

Here, S_CF and S_DF are the scaling factors of the elementary systems, while R is the radius of the sphere (e.g., the ionosphere) where the elementary systems are placed. The above formulas are given in a spherical coordinate system (r, θ', φ'), with unit vectors (ê_r, ê_θ', ê_φ'), oriented so that the center of the elementary system is at θ' = 0. This coordinate system is used in the definition of the elementary systems, as the expressions take their simplest form there. In the actual analysis, the elementary systems are rotated to a more suitable coordinate system, such as the geographical or geomagnetic system, as discussed in Sect. 2.5. Using the theory of Green's functions it can be shown (e.g., Vanhamäki and Amm 2011) that the CF and DF SECS form a complete set of basis functions for representing two-dimensional vector fields on a sphere. An individual CF SECS with its pole located at (R, θ_el, φ_el) represents a source or sink of a vector field at that point, while a DF SECS represents a rotational vector field around that point. Thus, by placing a sufficient number of CF and DF SECS at different locations in the ionosphere, one can construct any two-dimensional vector field from its sources and curls, in accordance with Helmholtz's theorem.
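Evaluating the field of a single elementary system at a given colatitude from its pole is then a one-liner. The sketch below assumes the cot(θ'/2) form of the SECS vector fields (Eqs. 2.7-2.8, after Amm 1997); argument names are hypothetical.

```python
import numpy as np

def secs_field(theta, scale, radius, kind="df"):
    """Horizontal vector field of one 2D SECS at colatitude theta
    (radians) measured from the SECS pole, on a sphere of the given
    radius.  Returns (v_theta, v_phi): a CF system points along
    e_theta (away from the pole), a DF system along e_phi (around it)."""
    magnitude = scale / (4.0 * np.pi * radius) / np.tan(theta / 2.0)
    if kind == "cf":
        return magnitude, np.zeros_like(magnitude)
    return np.zeros_like(magnitude), magnitude
```

The cot(θ'/2) factor diverges at the pole, reflecting the δ-function source, and decays with distance from it.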
In principle, the spatial resolution of the representation depends on the number and distribution of the elementary systems. However, in practical applications the amount of available data is a limiting factor.
Current and Magnetic Field
When using the SECS to represent currents, the DF systems form a rotational current that is closed within the ionospheric current sheet. This part of the current is described by ψ_J in Eq. (2.6). The CF systems represent the same part of the current as φ_J in Eq. (2.6), and are connected to the FAC via Eq. (2.4). The FACs are assumed to flow radially toward or away from the ionosphere, as illustrated in Fig. 2.1. As mentioned before, this is a reasonable assumption only at high magnetic latitudes. In addition to the δ-function at its pole, each CF SECS is also associated with a uniform FAC distributed all around the globe. However, in practice, the actual FACs are described by the δ-functions. The reason is that if the analysis area is large enough, the sum of the SECS's scaling factors (i.e., the sum or integral of the upward and downward FACs) is expected to be close to zero, so that the uniform FACs of the CF SECS will almost cancel each other.
When observing ionospheric current systems, the measured quantity is almost always the magnetic field produced by the currents. In order to use the SECS in these studies, the magnetic fields produced by the currents in individual CF and DF SECS need to be calculated, including the FAC in the case of the CF SECS. Amm and Viljanen (1999) did this calculation for the DF systems, by straightforward (although somewhat tedious) evaluation of the vector potential from the Biot-Savart law. The result is that the magnetic field has only r- and θ-components, given in Eqs. (2.13) and (2.14), where s = min(r, R)/max(r, R). The magnetic field of the CF system, with associated FAC, is most easily calculated using Ampère's circuit law, following the same reasoning as in Appendix A of Juusola et al. (2006). The important thing is to first convince oneself that, due to symmetries, the magnetic field must have the form B_CF = B_φ(r, θ')ê_φ. After that it is easy to evaluate the circuit law and obtain the field as given in Eq. (2.15): below the ionospheric current sheet (r < R) the field vanishes, while above it B_φ falls off as cot(θ'/2)/r.
Fig. 2.2 Geometry of the coordinate transformation. The elementary system is located at (θ_el, φ_el) and the result is evaluated at (θ_k, φ_k). θ' is the colatitude of the point (θ_k, φ_k) in the coordinate system centered at the elementary system. Adapted from Vanhamäki et al. (2003).

It is left as an exercise to the reader to check that Eqs. (2.13)-(2.15) give the correct magnetic field. This is most easily done by verifying that (1) the divergence of B_CF and B_DF is zero, (2) the discontinuity at the ionospheric current sheet (r = R) gives the horizontal current in Eqs. (2.7) and (2.8), (3) elsewhere the curl of B_DF is zero, and (4) the curl of B_CF gives the correct FAC above the ionosphere.
Coordinate Transformations
The fields of individual CF and DF SECS in Eqs. (2.7)-(2.8) and (2.13)-(2.15) are given in a coordinate system that is centered at the SECS pole. Typically, the analysis is done in the geographical or geomagnetic coordinate system, which is now the unprimed system. Assume that measurements at locations (r_k, θ_k, φ_k), k = 1 ... K, are available, and place the SECS at various locations (R, θ_n, φ_n), n = 1 ... N, in the ionosphere. In order to use the SECS, the colatitude θ' and unit vectors (ê_θ', ê_φ') need to be transformed from the SECS-centered coordinate system to the geographical or geomagnetic system. The radial coordinate and unit vector require no transformation, as they are the same in both systems. This is a straightforward rotation of the coordinate system, but for completeness' sake one possible method is presented here. The geometry of the situation is illustrated in Fig. 2.2. According to spherical trigonometry the colatitude θ' is given by

cos θ' = cos θ_k cos θ_el + sin θ_k sin θ_el cos(φ_el − φ_k).   (2.16)

From Fig. 2.2 the unit vectors can be expressed in the unprimed system (Eqs. 2.17-2.18), and it is a straightforward exercise in spherical trigonometry to evaluate the required rotation angles from the known coordinates (Eqs. 2.19-2.20). With these expressions, it is easy to calculate the current or magnetic field at a geographical location (r_k, θ_k, φ_k) that is produced by a SECS located at the geographical point (R, θ_el, φ_el).
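The spherical law of cosines in Eq. (2.16) translates directly into code. The sketch below computes the SECS-centred colatitude θ' for one evaluation point; the function name is hypothetical and all angles are in radians.

```python
import numpy as np

def secs_colatitude(theta_k, phi_k, theta_el, phi_el):
    """Colatitude theta' of the evaluation point (theta_k, phi_k) in the
    coordinate system centred on a SECS pole at (theta_el, phi_el),
    via the spherical law of cosines (Eq. 2.16)."""
    c = (np.cos(theta_k) * np.cos(theta_el)
         + np.sin(theta_k) * np.sin(theta_el) * np.cos(phi_el - phi_k))
    # clip guards against round-off pushing |c| marginally above 1
    return np.arccos(np.clip(c, -1.0, 1.0))
```

For a SECS pole at the coordinate north pole (θ_el = 0) this reduces to θ' = θ_k, as expected.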
Vector Field Analysis with SECS
In practical calculations, the elementary systems are placed on some discrete grid, and the scaling factors give the divergence and curl of the vector field in the grid cell. In some arbitrary grid cell n, the scaling factors are

S_CF(n) = ∫_cell n (∇ · V) da,   (2.21)
S_DF(n) = ∫_cell n (∇ × V) · ê_r da,   (2.22)

where da is the area element. This means that the curl and divergence distributed over the grid cell are represented by point sources at the center of the cell. With SECS, a vector field (e.g., the ionospheric horizontal current or electric field) is composed of rotational and divergent parts as

V = M_1 · S_CF + M_2 · S_DF.   (2.23)

The composite vector V contains the θ- and φ-components of the vector field V at the grid points r_k = (R, θ_k, φ_k) (Eq. 2.24). The vectors S_CF and S_DF contain the scaling factors of the CF and DF SECS, respectively, at the grid points r_el. Here S_CF(r_el) and S_DF(r_el) should be interpreted as the average divergence and curl of V over the grid cells, as in Eqs. (2.21) and (2.22). The components of the transfer matrices M_1,2 can be calculated using Eqs. (2.7) and (2.8), as explained in detail by Vanhamäki (2011). Figure 2.3 illustrates how an irrotational potential field can be modeled with just CF elementary systems. In this case the vector S_DF in Eq. (2.23) is zero.
A given vector field V could be represented with elementary systems by evaluating the integrals in Eqs. (2.21) and (2.22) over a suitable grid. However, it is often more practical to rewrite Eq. (2.23) as

V = M_12 · S_CD,

where the CF and DF parts have been combined into M_12 = (M_1, M_2) and S_CD = (S_CF, S_DF)ᵀ. Now, the equation can be inverted for the unknown scaling factors contained in the vector S_CD. This inverse problem can be solved in various ways, for example, employing singular value decomposition (SVD) of the matrix M_12. The solution method and possible regularization of the inverse problem (see Sect. 2.10.3) may have some effect on the solution, especially when the matrix is under-determined (more unknowns than measurements). If it is known a priori that the vector field V is either curl- or divergence-free (e.g., the ionospheric electric field is often assumed curl-free), it is only necessary to use one type of elementary system, thus reducing the size of the inverse problem by a factor of two. If the vector field V is known globally (e.g., everywhere in the ionosphere), it is completely determined by its curl and divergence. However, if the vector field is specified in only some limited region, it may contain a Laplacian part that has zero curl and divergence inside this region. In a potential representation, such as in Eqs. (2.5) and (2.6), this Laplacian part would be determined by the boundary conditions at the edge of the area where V is known. In the SECS representation, the Laplacian part can be included by placing some elementary systems outside the region of interest. These "external" SECS represent the effect that distant sources (i.e., divergences or curls) have inside the analysis area. Therefore, in regional studies, it is important to make the SECS grid somewhat larger than the area of interest (see Sect. 2.10.1), but it should be remembered that in the outlying areas the SECS representation is no longer unique.
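A truncated-SVD solution of the combined system, as one possible realisation of the inversion discussed above, might look as follows. This is a sketch, not the chapter's prescription: the relative cutoff is an illustrative choice, and the matrix and vector names mirror the notation in the text.

```python
import numpy as np

def fit_scaling_factors(M, v, rel_cutoff=1e-2):
    """Solve the linear model M @ s = v for the SECS scaling factors
    with a truncated SVD: singular values below rel_cutoff * s_max are
    discarded, a simple regularisation for ill-conditioned or
    under-determined transfer matrices."""
    U, sigma, Vt = np.linalg.svd(M, full_matrices=False)
    inv_sigma = np.zeros_like(sigma)
    keep = sigma > rel_cutoff * sigma[0]  # sigma is sorted descending
    inv_sigma[keep] = 1.0 / sigma[keep]
    return Vt.T @ (inv_sigma * (U.T @ v))
```

For an overdetermined, well-conditioned matrix this reduces to the ordinary least-squares solution; raising `rel_cutoff` trades fidelity to the data for a smoother, more stable solution.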
This kind of vector field representation was one of the original uses of the elementary current systems. When Amm (1997) introduced the CECS and SECS for ionospheric studies, he was searching for a practical way to decompose vector fields into curl-free and divergence-free parts and also to interpolate the fields in a way that would conserve their curl-free and/or divergence-free character.
Analysis of Ground Magnetic Measurements
An important application of the SECS method has been the estimation of the ionospheric current system based on the magnetic disturbance field it creates at the ground. This is a classical problem in geosciences, and many methods have been developed to tackle it; see, e.g., Chapman and Bartels (1940), Untiedt and Baumjohann (1993), Amm and Viljanen (1999), and references therein. Most of the previously used methods were based on harmonic analysis, where the magnetic field is expanded as a sum of suitable basis functions, for example, spherical harmonics. In the SECS analysis, it is the current system that is expanded in terms of elementary systems, whose amplitude is then fitted to match the measured magnetic disturbance field.
An important practical question is how to separate the disturbance field from the total magnetic field that is measured by magnetometers. Detailed discussion is beyond this review, but we mention that with ground magnetometer data this is usually done by determining some quiet-time reference level and removing it from the data. van de Kamp (2013) presents one realization of this method.
The seminal work in ionospheric current studies using SECS analysis was by Amm and Viljanen (1999), who first derived analytical formulas for the magnetic field of the DF SECS and showed how the DF SECS could be used to estimate the ionospheric equivalent current from ground magnetic measurements. They also compared the SECS analysis with more traditional spherical cap harmonic analysis of the magnetic field, and demonstrated the practical advantages of the SECS method.
An important question is the relationship between the ionospheric equivalent current and the real ionospheric current. At high magnetic latitudes the curl-free part of the ionospheric horizontal current, together with the associated FAC, does not produce any magnetic field below the ionosphere. Fukushima (1976) showed this by assuming uniform ionospheric conductances, but the result is valid independent of the conductance distribution (Amm 1997). The crucial assumption needed in deriving this result is that the FAC should flow radially. For strictly radial FAC, the ground magnetic disturbance from ionospheric current is produced solely by the divergence-free part, as is evident also in Eqs. (2.13)-(2.15). This is only approximately true even at the auroral zone, and breaks down completely at lower latitudes, where the field lines deviate more from the radial direction.
When the magnetic field lines are tilted, the FACs and associated horizontal curl-free currents make some contribution to the ground magnetic disturbance. As for example Tamao (1986) showed, this contribution can be reasonably large even at ∼60° magnetic latitude. Luckily, the ground magnetic field due to tilted FACs is typically spatially smoother than that due to divergence-free currents, as is evident in Fig. 6 of Tamao (1986). Therefore contributions from opposite FACs should largely cancel each other, with the remaining magnetic effect being rather small and spatially smooth. For FACs tilted in the north/south direction the ground magnetic field is mostly in the east/west direction, which should show up as north/south equivalent current (see, e.g., Fig. 12 in Untiedt and Baumjohann 1993). Taking all this into account, it should be safe to assume that at high magnetic latitudes the ionospheric equivalent current is approximately equal to the divergence-free part of the actual ionospheric current, possibly apart from a relatively small and smooth north/south directed background current. For a more thorough discussion about the concept of equivalent current see, for example, Sect. 3 in Vanhamäki and Amm (2011) and references therein.
When calculating the ionospheric equivalent current with the SECS method, the horizontal components of the ground magnetic disturbance B_G measured by magnetometers at locations r_n = (R_E, θ_n, φ_n) during some time instant are collected into a composite vector,

B_G = (B_x(r_1), B_y(r_1), B_x(r_2), B_y(r_2), …, B_x(r_N), B_y(r_N))^T,  (2.29)

where x and y denote the local North and East directions. The unknown scaling factors of the DF SECS located at r^el_n = (R, θ^el_n, φ^el_n) are collected into another vector S_DF as in Eq. (2.26). These vectors are connected by a transfer matrix T, so that

B_G = T · S_DF.  (2.30)

The components of the transfer matrix T give the magnetic field caused by each individual unit SECS at the magnetometer sites, so T is known and depends only on geometry. For example, T_{2,4} gives the y-component (East) of B_G at r_1 caused by the SECS centered at r^el_4. Details of calculating the matrix T and inverting Eq. (2.30) for the unknown scaling factors S_DF using truncated singular value decomposition are given by Amm and Viljanen (1999) and Pulkkinen et al. (2003b). Once the scaling factors are known, the actual ionospheric equivalent current J_eq,ion can be calculated using Eq. (2.8) for each individual DF SECS.
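The bookkeeping between stations, components, and rows of the composite vector is easy to get wrong. The small Python helper below (an illustration, not part of the chapter's supplementary code) makes one possible ordering explicit: with two horizontal components per station, ordered (North, East), the eastward component at station 1 lands on row 2, matching the T_{2,4} example above.

```python
def row_index(station: int, component: int, n_comp: int = 2) -> int:
    """Row of the composite data vector for a given station and component.

    Both `station` and `component` are 1-based, as in the text;
    component 1 = B_x (north), component 2 = B_y (east).
    """
    return n_comp * (station - 1) + component

# The y (East) component at station 1 sits on row 2, so the matrix
# element T[2, 4] couples it to the SECS with scaling factor S_4.
print(row_index(station=1, component=2))
```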
A Matlab code included as supplementary material demonstrates the process of calculating the ionospheric equivalent current with the SECS method. Readers are encouraged to study the code and experiment with it. However, the code should not be directly applied to other magnetometer networks, as some parameters may have to be adjusted with changing geometry of the network. This is further discussed in Sect. 2.10. Quick-look plots of the equivalent currents calculated with the SECS method are provided by the Finnish Meteorological Institute. 2
Separation into Internal and External Parts
In the above discussion, only the horizontal part of the ground magnetic disturbance was used, and all the elementary systems were placed at the ionosphere (radius R), thus determining the ionospheric equivalent current. However, due to geomagnetic induction, the observed ground magnetic perturbation also has internal telluric sources, especially during disturbed geomagnetic conditions (Tanskanen et al. 2001). Using all three components of the observed ground magnetic disturbance, it is possible to separate the measured field into internal and external parts, which can be represented by two layers of equivalent currents (e.g., Haines and Torta 1994).
As far as the SECS method is concerned, this kind of separation was first applied by Pulkkinen et al. (2003b). The method is very similar to the above discussion of ionospheric equivalent currents, but in this case all three magnetic field components are used and there are two layers of elementary systems, one in the ionosphere and the other inside the ground.
The measured ground magnetic disturbance B_G at magnetometer locations r_n = (R_E, θ_n, φ_n) is collected into a composite vector that now contains all three components,

B_G = (B_x(r_1), B_y(r_1), B_z(r_1), …, B_x(r_N), B_y(r_N), B_z(r_N))^T.  (2.31)

The external (= ionospheric) DF SECS are located at r^el,e_n = (R, θ^el,e_n, φ^el,e_n), while the internal DF SECS are placed at r^el,i_n = (R_i, θ^el,i_n, φ^el,i_n). Note that in general there can be a different number of internal and external elementary systems, and they can be located at different latitudes and longitudes. The scaling factors are collected into vectors S_e and S_i, which are connected to the measurements by transfer matrices T_e and T_i,

B_G = T_e · S_e + T_i · S_i.  (2.34)

These matrices can be calculated in a completely similar manner as discussed in the previous section, except that in this case, the matrices also include the vertical component of the magnetic field. For solving the unknown scaling factors, Eq. (2.34) is again written as a single matrix equation,

B_G = [T_e T_i] · [S_e ; S_i],  (2.35)

which can be inverted as before. The equivalent currents can mimic the magnetic field produced by all the currents that are located behind them, as seen from the ground surface. That is, external and internal equivalent currents represent currents that are located either above the ionospheric layer or below the internal layer, respectively. As the induced telluric currents can flow at any depth, and also very close to the surface especially in the highly conductive oceans, in principle it would be best to place the internal equivalent current just below the ground surface. However, that may lead to numerical problems, as the finite grid spacing and the singular nature of the SECS mean that one SECS pole placed close to a magnetometer station would make an unrealistically large contribution to the measurement. As a reasonable compromise between numerical stability and inclusion of near-surface currents, Pulkkinen et al. (2003b) placed the internal current layer at 30 km depth.
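The joint external/internal fit of Eq. (2.34) amounts to stacking the two transfer matrices side by side and solving one least-squares problem. The Python sketch below uses random placeholder matrices; in reality T_e and T_i would hold the three-component magnetic field of unit DF SECS placed in the ionosphere and inside the ground, respectively.

```python
import numpy as np

# Schematic joint inversion for external and internal scaling factors.
rng = np.random.default_rng(1)
n_data = 3 * 20                 # three components at 20 stations
n_ext, n_int = 15, 15
T_e = rng.standard_normal((n_data, n_ext))   # placeholder external matrix
T_i = rng.standard_normal((n_data, n_int))   # placeholder internal matrix
S_e_true = rng.standard_normal(n_ext)
S_i_true = rng.standard_normal(n_int)
B = T_e @ S_e_true + T_i @ S_i_true          # synthetic "measurements"

T = np.hstack([T_e, T_i])                    # single system matrix [T_e T_i]
S, *_ = np.linalg.lstsq(T, B, rcond=None)
S_e, S_i = S[:n_ext], S[n_ext:]
print(np.allclose(S_e, S_e_true), np.allclose(S_i, S_i_true))
```

Note that the stacked matrix has twice as many columns as an external-only fit, which is one reason the separated problem is less stable in practice.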
Making the separation into internal and external equivalent currents is in principle more accurate than calculating only the external current from Eq. (2.30). However, the separation also has some drawbacks in practice. First of all, the inverse problem becomes less stable compared to the external-only calculation, as the number of observations increases from two to three components per station, but typically the number of SECS is doubled. Furthermore, even though the separation is in principle unique when done globally (and with perfect data coverage), Thébault et al. (2006) demonstrated that in local studies, the internal and external sources mix to some degree. In practical applications, the limited amount of input data leads to further ambiguities in the solution. Thébault et al. (2006) considered spherical cap harmonic analysis of the ground magnetic field, but similar problems are expected also in SECS analysis, although this has not been studied in detail. For these reasons, the telluric contributions have been neglected in many SECS studies, which can be expected to lead to some overestimation of the ionospheric equivalent currents, especially during disturbed conditions when time variations are rapid.
Analysis of Satellite Magnetic Measurements
In the above discussion, the focus was on calculating the ionospheric equivalent current from ground magnetic measurements, which has arguably been the most successful and widely used application of the SECS method. The main limitation is that only the equivalent current, not the whole ionospheric current containing the CF, DF, and FAC parts, can be calculated from ground magnetic data alone. Getting the full current would require some further assumptions about the ionospheric electric field or electric conductivity, as discussed, e.g., by Untiedt and Baumjohann (1993) or Vanhamäki and Amm (2011). This is not a shortcoming of the SECS method, but a general limitation inherent to magnetic fields and currents.
The situation is quite different when there are magnetic measurements from low-orbiting satellites, such as CHAMP (CHAllenging Minisatellite Payload, https://www.gfz-potsdam.de/champ/) or Swarm (Olsen et al. 2013). The satellites pass through the FACs, so their effect dominates the observed magnetic disturbance. The ionospheric horizontal currents, assumed to flow in a thin sheet at E-region altitude, are usually several hundred kilometers below the satellite, and therefore make a smaller contribution to the measured field. As the satellite magnetic data contains information on the FAC, and associated CF current via Eq. (2.4), as well as the DF ionospheric current, the whole current system may be estimated by fitting both CF and DF SECS to the measurements. Therefore satellite data can in principle provide the "real" current distribution.
Often data from only one satellite at any specific region or instant of time are available. Therefore assumptions about gradients perpendicular to the satellite track have to be made, or combined data from several orbits (typically several months or years) are used. Exceptions to this are the Swarm mission and the AMPERE (Active Magnetosphere and Planetary Electrodynamics Response Experiment, Anderson et al. 2014) project. For further discussion of AMPERE, see Chap. 8. The SECS method tailored for Swarm data analysis is discussed in detail in Chap. 3, while analysis of single-satellite passes with assumption of vanishing gradients is discussed in the next section.

Juusola et al. (2014) presented a statistical analysis of the CHAMP satellite's magnetic data using the SECS method. They first projected all the magnetic measurements into a regular grid in the geomagnetic coordinate system, and then averaged and binned the data with respect to solar wind conditions. The ionospheric current system, including the FAC and both CF and DF parts of the horizontal current, was determined by fitting CF and DF SECS to the gridded magnetic data. Apart from including the CF systems, the approach is very similar to the analysis of ground magnetic data.
The gridded magnetic disturbances measured by CHAMP are collected into a composite vector similar to Eq. (2.31). The CF and DF SECS are placed at selected positions in the ionosphere, and their scaling factors are collected into vectors S_CF and S_DF as in Eqs. (2.21) and (2.22). In general, there can be a different number of CF and DF elementary systems, and they can be located at different latitudes and longitudes. The vectors are connected by transfer matrices T_cf and T_df, so that

B = T_cf · S_CF + T_df · S_DF.  (2.36)

These matrices can be calculated in a completely similar manner as discussed in the previous section, using Eq. (2.15) for the CF SECSs and Eqs. (2.13) and (2.14) for the DF SECSs. The fitting problem is again combined into a single matrix equation and solved for the unknown scaling factors. It should be noted that the assumption of perfectly radial FAC used in the CF SECS will lead to some errors when analyzing satellite data. This is most clearly manifested as a slight southward shift in the ionospheric location of the FAC, which is caused by using radial instead of field-aligned mapping from the satellite altitude (typically ∼400 km) to the ionospheric E-region. However, Juusola et al. (2014) estimated that at high latitudes the error was at most 0.9°, and thus smaller than the latitude resolution of their statistical grid.
When using only satellite data, the ionospheric currents and induced telluric currents can not be separated. The reason is that both current systems are below the satellite, so they produce qualitatively similar magnetic effects. Despite the large distance between the satellite and the induced telluric currents, in some cases, they may have a large effect on the measured magnetic disturbance, as shown by Vanhamäki et al. (2005). The telluric currents may be approximated by placing a perfect conductor inside the Earth at a certain depth (depending on the ground conductivity), in which case the internal currents would be mirror images of the ionospheric currents. This approach was used by Olsen (1996), but in a statistical comparison of satellite- and ground-based currents Juusola et al. (2016) found it inadequate.
1D SECS
In many situations, data are only available along a single line, and not on a two-dimensional area. Typical cases are passes of a single satellite, or a (North-South) chain of magnetometers. In these cases, for single events, some additional assumptions are necessary. For example, the SECS method discussed in Sect. 2.7 is not directly applicable, as it produces reliable results only if measurements are available in a suitably large two-dimensional area.
One approach is to use a "1D assumption", where gradients of the studied parameter (current, electric field, conductances, …) are assumed to vanish in one specific direction. This is identified as the "zero-gradient direction", while the perpendicular direction is the "1D direction" (e.g., along a magnetic meridian). For example, assume that ionospheric current depends only on latitude so that gradients in the longitudinal direction vanish. It should be noted that this zero-gradient direction need not be exactly perpendicular to the satellite path or magnetometer chain, but the angle should still be large enough so that good coverage is achieved in the 1D direction. Also, even though the analysis may be simplified by assuming some a priori fixed 1D direction (e.g., geomagnetic meridian), there exist methods (e.g., minimum variance analysis, Sonnerup and Scheible 1998) that can be used to determine the optimum direction from the data. Some method should also be used to check how good the 1D assumption is in each specific case, e.g., by estimating how small the gradients in the "zero-gradient direction" actually are, because in reality, the situation is never perfectly one-dimensional.
One-dimensional variants of the CF and DF SECS were defined by Vanhamäki et al. (2003) and Juusola et al. (2006), respectively. In order to distinguish them from the elementary systems discussed thus far, the terms 1D and 2D SECS are used here. The 1D variants can be obtained by placing the poles of the respective two-dimensional SECS around a circle at a constant colatitude θ_0, essentially integrating over the position of the 2D SECS's poles (Vanhamäki et al. 2003). The resulting current systems are

J_1D,cf(θ) = (I_0 / 4πR) f(θ; θ_0) ê_θ,  (2.37)

J_1D,df(θ) = (I_0 / 4πR) f(θ; θ_0) ê_φ,  (2.38)

where f(θ; θ_0) = −tan(θ/2) for θ < θ_0 and f(θ; θ_0) = cot(θ/2) for θ > θ_0. The 1D SECS are illustrated in Fig. 2.4 and may look deceptively similar to the 2D SECS introduced in Sect. 2.3. However, the crucial difference is that the 1D SECS are defined in the global coordinate system (often geographical or geomagnetic), where the 1D direction is in the meridional plane (all azimuthal gradients vanish). Therefore Eqs. (2.37) and (2.38) have no prime in θ or φ.
The 1D CF SECS has a ring of δ-function divergence at colatitude θ 0 , with uniform and opposite divergence elsewhere. Similarly, the 1D DF SECS has a band of δ-function curl, compensated by uniform curl elsewhere. This is actually an alternative way to define the 1D SECS and to derive Eqs. (2.37) and (2.38). Similar to the general 2D SECS, the CF and DF 1D SECS are basis functions for any continuously differentiable vector field on a sphere, with vanishing gradients in the azimuthal direction. By using several 1D SECS with different amplitudes and different "critical co-latitudes" θ 0 , any such vector field can be constructed.
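The δ-ring structure just described can be checked numerically. The Python sketch below assumes the usual 2D SECS prefactor I_0/(4πR) for the 1D profiles of Eqs. (2.37) and (2.38) (a normalization assumption that should be checked against the printed equations). With that normalization the jump of the profile across θ_0 is I_0 (cot(θ_0/2) + tan(θ_0/2))/(4πR) = I_0/(2πR sin θ_0), as expected for a δ-function ring source.

```python
import numpy as np

def j_1d(theta, theta0, I0=1.0, R=1.0):
    """Piecewise amplitude of the 1D SECS current (unit sphere, R = 1).

    Assumed normalization I0/(4*pi*R); -tan(theta/2) on the poleward
    side of the delta ring (theta < theta0), cot(theta/2) equatorward.
    """
    theta = np.asarray(theta, dtype=float)
    pref = I0 / (4.0 * np.pi * R)
    return np.where(theta < theta0,
                    -pref * np.tan(theta / 2.0),
                    pref / np.tan(theta / 2.0))

theta0 = np.radians(70.0)       # colatitude of the delta-function ring
eps = 1e-9
jump = j_1d(theta0 + eps, theta0) - j_1d(theta0 - eps, theta0)
expected = 1.0 / (2.0 * np.pi * np.sin(theta0))   # I0/(2*pi*R*sin(theta0))
print(np.isclose(jump, expected))
```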
The magnetic field of the 1D DF systems (when used to represent currents) was calculated by Vanhamäki et al. (2003). With s = min(r, R)/max(r, R), the components can be written compactly as series expansions in P_l and P_l^1, the unnormalized zeroth- and first-order associated Legendre polynomials. The magnetic field of the 1D CF SECS (with associated radial FAC) was calculated by Juusola et al. (2006) using Ampere's law: the field vanishes below the current sheet (r < R), while above it (r > R) it is purely azimuthal,

B_φ(r, θ) = −μ_0 (R/r) J_θ(θ),

where J_θ is the θ-component of the 1D CF current in Eq. (2.37). The 1D SECS are used in a completely analogous way to the 2D systems discussed in Sect. 2.7. For ground magnetic analysis, the B_θ-component (southward in the chosen coordinate system) can be used to calculate the ionospheric equivalent current, which in a 1D situation has only a φ-component (as a 1D current in the θ-direction can not be completely divergence-free). Alternatively, both the B_r- and B_θ-components can be used for the internal/external separation. The B_φ-component should be much smaller than the other two, which can be used as one check of the quality of the 1D assumption.
In satellite applications, the southward current J_θ is computed from the eastward magnetic disturbance by fitting 1D CF SECS to the data. The eastward divergence-free current is associated with magnetic disturbances in the radial and θ-directions. If only B_r is used in fitting the 1D DF SECS, the measured B_θ may be compared to the magnetic disturbance calculated from the fitted DF SECS in order to estimate how good the 1D assumption is. This line of reasoning was applied by Juusola et al. (2007), who used it to search for the best 1D direction by allowing the North Pole of the coordinate system to move. In case they did not find good enough agreement in B_θ for any North Pole location, the event was considered 2D and removed from the analysis. It is better to use B_r in the fitting and B_θ in the checking rather than the other way round, as the horizontal component is more easily affected by 2D structures in the FAC. However, due to non-radial FAC, it would be even better to use the field-aligned magnetic disturbance in fitting the 1D DF SECS. That can be done by taking the appropriate linear combination of the r- and θ-components.
Some Practical Considerations
In this section, several practical issues related to the application of the SECS methods are considered. Some of them are also discussed and solved in the example code that is included as supplementary material in the book. However, issues such as grid selection and regularization of the matrix inversion depend on the geometry of each situation, and must be adjusted for each magnetometer network.
Grid and Boundary Effects
The SECS method does not require any explicit boundary conditions, even when applied to regional studies. This is different from potential representations of the electric field or current, like Eqs. (2.5) and (2.6), which require explicit boundary information. In contrast, in the SECS analysis, there is an implicit condition that the vector fields are smooth and source-free outside the analysis area. The CF and DF SECS represent all sources of the vector field, so in regions where there are no SECSs both the curl and divergence must vanish. Of course, explicit boundary conditions may be added using virtual data points at the edges of the analysis area, requiring that the vector field has a certain value at these points.
As mentioned in Sect. 2.6, in order to minimize boundary effects caused by the implicit boundary conditions and the possible presence of a Laplacian field, the SECS grid should be somewhat larger than the area of interest. This is illustrated in Fig. 2.5, where a given vector field shown in panel (a) is divided into CF and DF parts using Eq. (2.27). The original vector field was constructed from a curl-free part with sources inside the shown area and a divergence-free part with sources outside, see panels (b-d). In this case, the data region is the area where the vector field shown in panel (a) is given. The SECS grid used in the analysis is the colored area in panels (e-f), where also the data region is shown as a black rectangle. Note that although this example was done with Cartesian Elementary Current Systems (CECS), the same principle holds for SECS.
When the given vector field in panel (a) is decomposed into DF and CF parts using elementary systems, the local curl-free part is correctly represented in terms of CF systems inside the area where the vector field was originally specified. This is seen by comparing the estimated scaling factors shown in panels (e-f) with the model CF scaling factors that are inside the black rectangle in panel (d). The estimated CF scaling factors inside the data region agree very well with the model, while the estimated DF scaling factors are nearly zero. However, the remote divergence-free part gets represented in terms of both CF and DF systems located just outside the data region. This is seen by comparing the areas outside the black rectangle in panels (d-f): In the model there are only DF CECS outside the black rectangle, but in the fit results both CF and DF CECS have nonzero amplitudes there. This demonstrates that in general, outside the data region, the decomposition is no longer unique, and a (locally) Laplacian field may equally well be caused by either remote curls or divergences, or some combination of them. In global studies, there is no such fundamental ambiguity, because a globally Laplacian field must vanish (a nonvanishing one would be physically unreasonable). In the case of Fig. 2.5, the Laplacian field corresponds to the remote currents shown in panel (c), caused by the DF CECS located outside the data region in panel (b).
In summary, the division of local field into CF and DF parts is in principle unique, but remote fields, whose sources are outside the data region, can not be decomposed in a unique way. This is not a limitation of the elementary system method, but similar ambiguities (related to boundary conditions) would appear also in potential representations like Eq. (2.6). Finally, it should be kept in mind that in practical applications data availability and quality are often serious limiting factors. Measurements are rarely as extensive, detailed and noise-free as the model field shown in panel (a) of Fig. 2.5.
The recommendation is to make the SECS grid somewhat larger than the area of interest. Figure 2.6 illustrates typical SECS and output grids used in the calculation of equivalent currents with data from the IMAGE magnetometer network. The role of the outlying SECS is to provide an equivalent representation of distant current systems that do not have sources (in this case curls) directly above the magnetometer network.
Singularities
The elementary systems defined in Eqs. (2.7) and (2.8) are unfortunately singular, as their current density diverges at the SECS pole, θ' = 0. Consequently, the magnetic field of a DF SECS given in Eqs. (2.13) and (2.14) has a singular point at (r = R, θ' = 0), while the CF SECS's field in Eq. (2.15) is singular along the line (r ≥ R, θ' = 0). These singularities should be kept in mind, as they may cause numerical problems.
In many applications, it is sufficient to select the SECS grid carefully, so that there is no need to evaluate the fields (either the SECS's vector field or magnetic field) too close to singular points. This is usually the case, e.g., in the calculation of equivalent currents from ground magnetic data, discussed in Sect. 2.7 and demonstrated in the example code. If the vertical separation between the SECS layers and the magnetometers is large enough, the singularities in the magnetic field do not matter. Similarly, the resulting equivalent current vectors can be calculated at the midpoints between the SECS locations, as illustrated in Fig. 2.6.
However, in some applications it is necessary to calculate the fields near the singularities. In this case, the elementary systems in Eqs. (2.7) and (2.8) may be modified to

J_cf(θ') = (I_0 / 4πR) g(θ'; θ_0) ê_θ',  (2.43)

J_df(θ') = (I_0 / 4πR) g(θ'; θ_0) ê_φ',  (2.44)

where g(θ'; θ_0) = α tan(θ'/2) for θ' < θ_0 and g(θ'; θ_0) = cot(θ'/2) for θ' ≥ θ_0. When α = cot²(θ_0/2), the vector fields are continuous at θ' = θ_0. Moreover, the δ-function source at the elementary system's pole is now spread uniformly inside a spherical cap of width θ_0. In a similar way the 1D SECS can be redefined so that the divergence or curl that in Eqs. (2.37) and (2.38) is a δ-function at colatitude θ_0 is uniformly spread to a spherical zone θ_0 − Δ ≤ θ ≤ θ_0 + Δ. Inside this zone the 1D DF SECS has current density

J_df(θ) = (I_0 / 4πR) [cos θ_0 cos Δ − (1 − sin θ_0 sin Δ) cos θ] / (sin θ_0 sin Δ sin θ) ê_φ,  (2.45)

while outside it is the same as in Eq. (2.38). The current density of a 1D CF SECS has the same expression, but in the ê_θ direction. Assuming radial FAC, the magnetic field of the modified CF SECS can be calculated using Ampere's law, as before. In fact, for a general ionospheric curl-free current J_cf(θ, φ), with (∇ × J_cf)_r = 0, and radial FAC, the magnetic field is

B(r, θ, φ) = 0 for r < R, and B(r, θ, φ) = μ_0 (R/r) J_cf(θ, φ) × r̂ for r > R.  (2.46)

This general result is evident from the magnetic field and current of CF SECS, given in Eqs. (2.15) and (2.7), respectively. Remember that the CF SECS form a complete set of basis functions for curl-free vector fields. However, the result can also be verified by checking that the magnetic field is divergence-free and gives the correct current distribution via Ampere's law. Equation (2.46) forms the basis for many analysis techniques for satellite magnetic data, including the analysis of AMPERE data, discussed further in Chap. 8. In contrast, it is quite unlikely that the magnetic field of the modified nonsingular DF SECS could be calculated in a closed form. Possibly a series expansion could be derived using the same methods as in Vanhamäki et al. (2003).
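The continuity claim for α = cot²(θ_0/2) is easy to verify numerically. The Python sketch below evaluates the angular profile of the modified elementary systems (Eqs. 2.43 and 2.44) on both sides of θ_0; the constant prefactor is omitted, since it does not affect continuity.

```python
import numpy as np

def modified_profile(theta, theta0):
    """Angular profile of the nonsingular SECS (prefactor dropped).

    alpha = cot^2(theta0/2) makes the profile continuous at theta0
    and finite at the pole, where the original cot(theta/2) diverges.
    """
    alpha = 1.0 / np.tan(theta0 / 2.0) ** 2
    theta = np.asarray(theta, dtype=float)
    return np.where(theta < theta0,
                    alpha * np.tan(theta / 2.0),
                    1.0 / np.tan(theta / 2.0))

theta0 = np.radians(5.0)
left = modified_profile(theta0 - 1e-10, theta0)
right = modified_profile(theta0 + 1e-10, theta0)
print(np.isclose(left, right))           # continuous across theta0
print(modified_profile(1e-6, theta0))    # small and finite near the pole
```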
However, in the attached example code this problem is simply ignored: Equation (2.44) is used for the vector field, while the magnetic field is calculated using Eqs. (2.13) and (2.14). This is slightly inconsistent, but does not appear to affect practical applications.
The effects of the singularities can be further reduced in a rather straightforward manner by subdividing the SECS into smaller units. This is not the same as making the original SECS grid finer, as that would increase the number of scaling factors. Rather, if one SECS with scaling factor S_n is normally placed into a grid cell n, then the cell is divided into N equal parts and SECSs with amplitudes S_n/N are located into each one. This way the size of the system matrix in Eq. (2.27) or (2.30) stays the same, but the matrix elements are calculated as sums of sub-elementary systems placed at different corners of the original grid cells. This does not completely remove the singularity, but reduces it into a smaller area, which can be either handled by using the redefined nonsingular SECS in Eqs. (2.43) and (2.44), or ignored completely. For example, if the original grid cell is divided into 100 sub-cells, then removing the one sub-cell where the calculation point is located should amount to roughly 1% error.
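The sub-division bookkeeping can be sketched as follows. The inverse-distance kernel used here is only a toy stand-in for the actual SECS field; the point is that the matrix element becomes a finite sum over sub-systems of amplitude S/N², and the one sub-cell containing the calculation point can simply be skipped.

```python
import numpy as np

def subdivided_element(pole, point, amplitude, n_sub=10, cell_size=1.0):
    """Matrix-element contribution of one SECS split into n_sub**2 parts.

    A toy 1/d kernel stands in for the real SECS field. Each sub-system
    carries amplitude/n_sub**2 and sits at a sub-cell center; the sub-cell
    containing the evaluation point is skipped, as suggested in the text.
    """
    n_tot = n_sub ** 2
    offsets = (np.arange(n_sub) + 0.5) / n_sub - 0.5   # sub-cell centers
    total, used = 0.0, 0
    for dx in offsets:
        for dy in offsets:
            sub = pole + cell_size * np.array([dx, dy])
            # Skip the sub-cell that contains the evaluation point
            if np.max(np.abs(point - sub)) < cell_size / (2 * n_sub):
                continue
            total += (amplitude / n_tot) / np.linalg.norm(point - sub)
            used += 1
    return total, used

pole = np.array([0.0, 0.0])
# Evaluation point exactly on one sub-cell center: that cell is dropped,
# all 99 others contribute a finite sum.
contrib, used = subdivided_element(pole, point=np.array([0.05, 0.05]),
                                   amplitude=1.0)
print(np.isfinite(contrib), used)
```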
Inversion Regularization
When applying the SECS method, matrix equations such as Eq. (2.27) or (2.30) need to be inverted. Often these are either under-determined (more unknown scaling factors than measurements), or otherwise ill-conditioned. In either case, a direct attempt to invert the equation will lead to nonsensical results. In this kind of situation, the problem requires regularization, either by adding some constraints or by making assumptions about the solution.
There are several possible methods to deal with these situations, but the traditional method of choice in SECS analysis has been the Singular Value Decomposition (SVD, see, e.g., Press et al. 1992, Sect. 2.6). In SVD the system matrix, e.g., T in Eq. (2.30), is decomposed into a product of three matrices,

T = U · S · V*,  (2.47)

where U and V are unitary matrices, S is a diagonal matrix containing the singular values (nonnegative, arranged from largest to smallest) and * denotes conjugate transpose. In the case of Eq. (2.30), the rows of V* represent different, mutually orthogonal configurations of the SECS scaling factors, while the columns of U give the corresponding (also mutually orthogonal) magnetic field configurations at the magnetometer stations. In some sense the corresponding singular values in S indicate how distinguishable these modes are in the magnetic field. A large value S_{n,n} means that the corresponding magnetic field configuration is easy to find in the data, while those with small values can be lost in the noise. Thus SVD may be used to locate and remove the ill-conditioned parts of the system matrix, making the inversion numerically stable. In practice Eq. (2.30) is inverted as

S_DF = V · σ · U* · B_G,  (2.48)

where σ is a diagonal matrix with elements

σ_{n,n} = 1/S_{n,n} if S_{n,n} > ε S_{1,1}, and σ_{n,n} = 0 otherwise.  (2.49)

Here, ε is a parameter that determines the cut-off point for small singular values, with respect to the largest value S_{1,1}. The important question is how to choose ε. Too small a value will lead to problems with noisy data (e.g., spurious structures appearing in the solution), while too large a value means that good data are rejected. Perhaps the only sure way is to test the analysis with simulated data, where the correct answer is known, and try different ε-values. In these tests, it is important to use realistic models and to add a realistic amount of noise to the simulated data. This kind of ε-optimization has been done, e.g., by Weygand et al. (2011) and Vujic and Brkic (2016).
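The cutoff rule of Eq. (2.49) translates almost directly into code. The Python sketch below implements the truncated-SVD inverse for a real-valued system matrix (so the conjugate transpose reduces to an ordinary transpose); the test system is synthetic.

```python
import numpy as np

def tsvd_solve(T, b, eps):
    """Solve T @ s = b with a truncated SVD.

    Singular values below eps * (largest singular value) are treated as
    zero, i.e., the corresponding modes are removed (cutoff of Eq. 2.49).
    """
    U, sv, Vh = np.linalg.svd(T, full_matrices=False)   # T = U diag(sv) Vh
    inv_sv = np.where(sv > eps * sv[0], 1.0 / sv, 0.0)  # truncated 1/S
    return Vh.T @ (inv_sv * (U.T @ b))                  # pseudo-inverse

# Well-conditioned synthetic system: a tiny cutoff recovers the model.
rng = np.random.default_rng(7)
T = rng.standard_normal((30, 8))
s_true = rng.standard_normal(8)
s_fit = tsvd_solve(T, T @ s_true, eps=1e-12)
print(np.allclose(s_fit, s_true))

# With an aggressive cutoff, poorly constrained modes are simply dropped.
s_reg = tsvd_solve(T, T @ s_true, eps=0.5)
print(s_reg.shape == s_true.shape)
```

Choosing eps is the same trade-off discussed in the text: too small and noise leaks into the solution, too large and real information is discarded.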
The SVD approach seems to work well in practice, but there are also other possible ways to regularize the inversion problem. One could add extra constraints to the system matrix by demanding that the spatial gradient of the SECS scaling factors must be as small as possible. Readers are encouraged to consider and test alternatives to the SVD.
Tilted Field Lines
When representing ionospheric currents with SECS, the FAC is connected to the CF systems. In the present formulation, the FAC are assumed to flow radially, as shown in Fig. 2.1 and assumed in deriving Eq. (2.15). As noted in Sect. 2.4, this assumption is a reasonable approximation only at high magnetic latitudes, where inclination of the magnetic field is large. At lower latitudes, the field lines are noticeably tilted, and Eq. (2.15) becomes an increasingly worse approximation.
However, in principle, this is a problem only when analyzing satellite magnetic measurements. The CF and DF systems still form a basis for representing horizontal vector fields (including the horizontal current) at middle and low latitudes, and the ground magnetic field can still be represented in terms of equivalent currents. Unfortunately, interpretation of the ionospheric equivalent current at lower latitudes is more problematic, as it equals the divergence-free part of the real current only at high magnetic latitudes.
In satellite analysis, one can try to correct small errors caused by the radial/tilted discrepancy, e.g., by introducing a "forbidden zone" between the assumed and actual locations of the FAC at satellite altitude (see Fig. 3 in Juusola et al. 2006), and by shifting the resulting FAC and curl-free current slightly poleward by the amount the field line moves between the satellite altitude and ionospheric E-layer (e.g., Juusola et al. 2016). However, these approximate corrections are reasonably accurate only at high magnetic latitudes, where the field lines are almost vertical.
The CF systems could be improved by assuming a more realistic geometry for the FAC. At high and middle latitudes it might be sufficient to model the FAC as semi-infinite line currents that are oriented along the magnetic field. There is a closed-form analytical expression for the magnetic field of such a line current, so the 2D CF SECS could be redefined by replacing the semi-infinite radial line current at the pole (the δ-function current) with a tilted one. There is no pressing need to redefine the uniform radial FAC, as those should mostly cancel when summing several different 2D CF SECS with different amplitudes. For even lower latitudes the semi-infinite line currents should probably be replaced with FAC flowing along the actual magnetic field lines, or at least along a dipole field. For the 1D CF SECS the horizontal current may be redefined so that it is antisymmetric between the (geomagnetic) hemispheres. Assuming that the FAC flows along dipole field lines between conjugate points, the magnetic field can be calculated using Ampere's law (Juusola et al. 2006; Deguchi 2014). Here, θ_I = arcsin(√(R/r) sin θ) is the colatitude mapped along a dipole field line to the ionosphere. One could also consider other modifications, where the FAC would be terminated at the equatorial plane (Deguchi 2014). They would have the advantage that the current system is not forced to be anti-symmetrical between the hemispheres. For the 2D CF SECS, these non-radial modifications have not been investigated.
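The dipole mapping used above reduces to a one-line function. A small sketch, assuming the mapped colatitude θ_I = arcsin(√(R/r) sin θ) for a dipole field line through radius r and colatitude θ that reaches the ionospheric shell at radius R (the shell altitude below is an assumed value):

```python
import math

R_ION = 6371.0 + 110.0  # ionospheric E-layer shell radius in km (assumed)

def mapped_colatitude(r_km, theta_rad):
    """Colatitude where the dipole field line through (r, theta) crosses the
    ionospheric shell, from the dipole relation sin^2(theta_I)/R = sin^2(theta)/r."""
    s = math.sqrt(R_ION / r_km) * math.sin(theta_rad)
    if s > 1.0:
        raise ValueError("field line does not reach the ionospheric shell")
    return math.asin(s)

# A point at 450 km altitude and 20 deg colatitude maps slightly poleward:
theta_sat = math.radians(20.0)
theta_ion = mapped_colatitude(6371.0 + 450.0, theta_sat)
```

This also shows why satellite-altitude structures should be shifted poleward when projected to the E-layer: for r > R the mapped colatitude is always smaller than the original one.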
Equivalent Current as a Proxy for FAC
As mentioned in Sect. 2.7, the divergence-free equivalent current may be calculated using only ground magnetic data. Thus, strictly speaking, no information is available about FAC. However, it is well known that certain patterns in the equivalent current are good indicators of FAC (e.g., Untiedt and Baumjohann 1993). These estimates can be made more formal by noting that under certain conditions the curl of the equivalent current is directly proportional to the FAC (see, e.g., Amm et al. 2002).
First of all, assume that the equivalent current is equal to the divergence-free part of the actual ionospheric current. This should be valid at high magnetic latitudes, as discussed in Sect. 2.7, although distortions created by internal induced currents and magnetospheric current systems may cause small deviations. More crucially, further assume that the Hall to Pedersen conductance ratio α = Σ_H/Σ_P is spatially constant and that conductance gradients are perpendicular to the electric field. Under these assumptions j = −(∇ × J_eq)_r/α, which is easy to verify by comparing the curl and divergence of ionospheric Ohm's law in Eq. (2.3).
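As a sketch of this proxy (in a local Cartesian approximation rather than the spherical geometry of the chapter), the radial curl of a gridded equivalent current can be evaluated with central differences and divided by −α. The rotational test field J_eq = (−y, x) has (∇ × J_eq)_r = 2 everywhere, which makes the result easy to check:

```python
def fac_proxy(Jx, Jy, dx, dy, alpha):
    """FAC proxy j ~ -(curl J_eq)_r / alpha at interior grid points, with
    (curl J)_r = dJy/dx - dJx/dy estimated by central differences."""
    ny, nx = len(Jx), len(Jx[0])
    out = [[0.0] * nx for _ in range(ny)]
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            dJy_dx = (Jy[i][j + 1] - Jy[i][j - 1]) / (2 * dx)
            dJx_dy = (Jx[i + 1][j] - Jx[i - 1][j]) / (2 * dy)
            out[i][j] = -(dJy_dx - dJx_dy) / alpha
    return out

# Rotational test field on a 5x5 grid: J = (-y, x)  ->  (curl J)_r = 2
n, d = 5, 1.0
xs = [j * d for j in range(n)]
ys = [i * d for i in range(n)]
Jx = [[-y for _ in xs] for y in ys]
Jy = [[x for x in xs] for _ in ys]
jr = fac_proxy(Jx, Jy, d, d, alpha=2.0)   # interior values: -(2)/2 = -1
```
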
This kind of reasoning has been used from time to time (e.g., Amm et al. 2002; Juusola et al. 2009; Weygand and Wing 2016), but it should be kept in mind that this relation is only approximate and relies on assumptions that are not generally valid. Therefore (∇ × J_eq)_r should only be considered as a proxy for FAC.
How SECS Have Been Used
As mentioned in Sect. 2.6, the elementary systems, as used in ionospheric studies, were originally introduced by Amm (1997) in order to optimally interpolate vector fields and to divide them into CF and DF parts. Since then the SECS method has found many other applications, most prominently in the analysis of satellite or ground-based magnetic measurements. The method to calculate ionospheric equivalent currents from ground-based data was developed by Amm and Viljanen (1999), as discussed in Sect. 2.7. It was extensively tested and expanded to include the internal/external separation by Pulkkinen et al. (2003a) and Pulkkinen et al. (2003b), while Vanhamäki et al. (2003) introduced the 1D variant for ground-based analysis. Since then the method has been used in numerous studies, especially with the IMAGE magnetometer network.
Other research groups have adapted the SECS method. For example McLay and Beggan (2010) applied the method to very sparse magnetometer arrays in order to interpolate the external magnetic disturbance field over large distances. Weygand et al. (2011) used the ground-based SECS method to calculate equivalent currents over North America and Greenland, by constructing an irregularly shaped grid for the elementary systems. They also carefully validated and optimized the inversion method by using simulated measurements based on a known ionospheric current model. Instead of calculating equivalent currents, Vujic and Brkic (2016) used the SECS method to construct a regional model of the crustal magnetic field using data from repeat stations and ground survey sites around the Adriatic Sea.
Satellite applications of the SECS method were developed by Juusola et al. (2006) and Juusola et al. (2014) for the 1D and 2D cases respectively, as described in Sects. 2.8 and 2.9. Juusola et al. (2007) carried out a large statistical study of the ionospheric current system by analyzing 6112 individual CHAMP passes with the 1D SECS method. To our knowledge the 2D SECS analysis of gridded and averaged CHAMP measurements by Juusola et al. (2014) was the first study where the 2D ionospheric current system, both CF and DF horizontal currents as well as FAC, was directly estimated from satellite magnetic data. Amm et al. (2015) developed a tailored SECS-based method for analyzing electric and magnetic data from the Swarm multi-satellite mission. This application is discussed in detail in Chap. 3.
Apart from magnetic data analysis, the SECSs can be used as basis functions for representing general vector fields and potentially transforming differential and integral equations into algebraic ones. This is very similar to using spherical harmonic functions in solving differential equations. For example, Vanhamäki et al. (2006) and Vanhamäki (2011) have used the elementary systems for solving ionospheric induction problems starting from Ohm's law and Maxwell's equations. Meanwhile, Vanhamäki and Amm (2007) introduced a new, local variant of the KRM (Kamide-Richmond-Matsushita) method (Kamide et al. 1981) for calculating the ionospheric electric field from ground magnetic data and estimated ionospheric conductances. In these applications, the elementary systems are used to transform the partial differential equations into matrix equations, which can be solved much more easily.
Finally, Amm et al. (2010) used the SECS method for local analysis of the ionospheric plasma convection (or electric field) measured by the SuperDARN radars. This application is very close to the original purpose of Amm (1997), as here the SECS method was used to combine and interpolate/extrapolate the radar line-of-sight velocity measurements into a divergence-free map of the plasma convection. The main advantages over the standard SuperDARN analysis (Ruohoniemi and Baker 1998) are that the SECS method can be used locally, relies only on measured data without any underlying statistical model, and does not require any explicit boundary conditions.

Acknowledgements Colin Waters provided a large number of comments, which improved the text considerably. The authors thank the institutes who maintain the IMAGE magnetometer array. IMAGE magnetometer data are available at http://www.space.fmi.fi/image. The editors thank Akimasa Yoshikawa for his assistance in evaluating this chapter. This work was supported by the Academy of Finland project 314664.
|
v3-fos-license
|
2018-12-10T23:50:25.070Z
|
2012-12-02T00:00:00.000
|
121516444
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://sljastats.sljol.info/articles/10.4038/sljastats.v12i0.4972/galley/3966/download/",
"pdf_hash": "6f5fbfc5c785bcb91861d21228eef1d6f7000729",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:734",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "6f5fbfc5c785bcb91861d21228eef1d6f7000729",
"year": 2012
}
|
pes2o/s2orc
|
Inference for Diffusion Processes using Combined Estimating Functions
A class of martingale estimating functions provides a convenient framework for studying inference for nonlinear time series models. Further, when information about higher order conditional moments of the observed process is available, the estimation based on combined estimating functions becomes more informative. In this paper, a general framework is developed for estimating parameters of diffusion processes with discretely sampled data using combined estimating functions. The approach is used to study parameter estimation for diffusion models for asset pricing including the Black-Scholes model, the Vasicek model, and the Cox-Ingersoll-Ross (CIR) model. Closed-form expressions for the gain in information are also discussed in some detail.
Introduction
For nonlinear time series models, Chandra and Taniguchi [1], Bera et al. [2], Merkouris [3], Ghahramani and Thavaneswaran [4], and more recently Liang et al. [5], among others, have studied inference using estimating functions. For discretely sampled diffusion-type models, parameter estimation using estimating functions has been studied in Bibby and Sørensen [6], Sørensen [7], and Bibby et al. [8]. However, additional assumptions were made and constraints were imposed to obtain the estimates. Moreover, information issues related to the estimating function approach have not been sufficiently addressed in the literature. In this paper, we study combined martingale estimating functions and show that the combined estimating functions are more informative when the conditional mean and variance of the observed process depend on the same parameter of interest. We then apply our approach to discretely sampled observations from diffusion models.
This paper is organized as follows. The rest of Section 1 presents the basics of estimating functions and the information associated with estimating functions. Section 2 presents the general model framework for discretely sampled observations from a continuous process, and presents the form of the optimal combined estimating function. In Section 3, the theory is applied to three different diffusion models that are widely used in asset pricing.
Suppose that {y_t, t = 1, ..., n} is a realization of a discrete-time stochastic process, θ is a p-dimensional parameter, and h_t = h_t(y_1, ..., y_t; θ) are specified q-dimensional vectors that are martingale differences. We consider the class M of zero-mean, square-integrable p-dimensional martingale estimating functions of the form g_n(θ) = Σ_{t=1}^n a_{t-1} h_t, where the weights a_{t-1} are p×q matrices depending on y_1, ..., y_{t-1}. The optimal estimating function g_n*(θ) maximizes, in the partial order of nonnegative definite matrices, the information matrix I_g(θ), and the corresponding optimal information reduces to the Godambe information [9]. It follows from Lindsay ([10], page 916) that if we solve an unbiased estimating equation g_n(θ) = 0 to get an estimator, then the asymptotic variance of the resulting estimator is the inverse of the information I_{g*}. Hence the estimator obtained from a more informative estimating equation is asymptotically more efficient.
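As a toy illustration of solving an unbiased estimating equation g_n(θ) = 0 (a hypothetical AR(1)-type example with constant conditional variance, not one of the paper's diffusion models): with martingale differences h_t = y_t − θ y_{t−1}, the optimal estimating function is proportional to g_n(θ) = Σ y_{t−1}(y_t − θ y_{t−1}), whose root is available in closed form.

```python
import random

def simulate_ar1(theta, sigma, n, seed=1):
    # y_t = theta*y_{t-1} + sigma*eps_t, a stationary AR(1) path
    random.seed(seed)
    y, out = 0.0, []
    for _ in range(n):
        y = theta * y + random.gauss(0.0, sigma)
        out.append(y)
    return out

def estimating_equation_root(y):
    # Root of g_n(theta) = sum_t y_{t-1} * (y_t - theta*y_{t-1}) = 0
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

y = simulate_ar1(theta=0.6, sigma=1.0, n=5000)
theta_hat = estimating_equation_root(y)
```

Because the estimating equation is linear in θ here, no numerical root finding is needed; for genuinely nonlinear cases one would bracket the root and solve g_n(θ) = 0 numerically.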
Estimating Function Approach for a Discretely Sampled Continuous Stochastic Process
Assume that a real-valued continuous-time process is observed at discrete, equally spaced time points, and let F_{t-1} denote the information available up to time t - 1. Write μ_t(θ) = E(y_t | F_{t-1}) for the conditional mean and σ_t²(θ) = Var(y_t | F_{t-1}) for the conditional variance of the observed process, and define the martingale differences m_t = y_t - μ_t(θ) and M_t = m_t² - σ_t²(θ). Theorem 1. For the general model in (2.1)-(2.4), in the class of all combined estimating functions of the form g_C(θ) = Σ_{t=1}^n (a_{t-1} m_t + b_{t-1} M_t), (a) the optimal estimating function g_C*(θ) is obtained with the optimal weights a_{t-1}* and b_{t-1}*, which depend on the first four conditional moments of the process as in (2.9); and (b) its information is at least as large as that of the estimating functions based on m_t or M_t alone. Proof. The proof of Theorem 1 is similar to that of Theorem 2.1 in Liang et al. [5].
Examples
In the three examples provided in this section, we assume that W_t is a Wiener process.
Geometric Brownian Motion with Volatility as a Function of Drift
Consider the Black and Scholes model (Black and Scholes [11]), a geometric Brownian motion in which the volatility is a function of the drift parameter θ.
We estimate the unknown parameter θ, which appears simultaneously in the conditional mean and variance. From the first four conditional moments of the discretely observed process, the martingale differences m_t and M_t are constructed. The optimal estimating function g_m*(θ) based on the martingale difference m_t yields an estimator for θ in closed form, and similarly the optimal estimating function g_M*(θ) based on M_t yields a second estimator. The information associated with g_m*(θ) and g_M*(θ) can likewise be written in closed form.
It follows from Theorem 1 that the optimal combined estimating function based on m_t and M_t is more informative than either component alone, and the associated gain in information approaches a limiting value as nh grows.
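Under a simple Euler discretization of a geometric Brownian motion dX = θX dt + σX dW (an assumed simplification, not the paper's exact conditional moments), the martingale difference is m_t = y_t − y_{t−1}(1 + θh) with conditional variance σ² y_{t−1}² h, and the optimal estimating equation based on m_t has the closed-form root theta_hat = Σ(y_t/y_{t−1} − 1)/(nh):

```python
import random

def simulate_gbm_euler(theta, sigma, h, n, x0=1.0, seed=2):
    # Euler scheme: x_t = x_{t-1} + theta*x_{t-1}*h + sigma*x_{t-1}*sqrt(h)*eps
    random.seed(seed)
    x, path = x0, [x0]
    for _ in range(n):
        x = x + theta * x * h + sigma * x * (h ** 0.5) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

def theta_hat_gbm(path, h):
    # Root of the optimal estimating equation under the Euler scheme:
    # sum_t (y_t - y_{t-1} - theta*h*y_{t-1}) / y_{t-1} = 0
    n = len(path) - 1
    return sum(path[t] / path[t - 1] - 1.0 for t in range(1, n + 1)) / (n * h)

path = simulate_gbm_euler(theta=0.1, sigma=0.2, h=0.01, n=20000)
theta_est = theta_hat_gbm(path, 0.01)
```
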
Ornstein-Uhlenbeck Model
For the Ornstein-Uhlenbeck model, the martingale differences m_t and M_t are derived from the exact conditional mean and variance of the discretely sampled process, and the optimal estimating functions based on m_t and M_t, together with their associated information matrices, follow as in Section 2. It follows from Theorem 1 that the optimal combined estimating functions based on m_t and M_t for α, µ and σ² can be written down explicitly; setting these estimating functions equal to zero and solving them simultaneously yields the estimators for α, µ and σ².
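For the Ornstein-Uhlenbeck model, the exact transition is X_t = µ + e^{−αh}(X_{t−1} − µ) + η_t with Var(η_t) = σ²(1 − e^{−2αh})/(2α). The sketch below uses simple moment estimates from the implied AR(1) structure (an assumed simplification of solving the simultaneous optimal combined estimating equations):

```python
import math, random

def simulate_ou(alpha, mu, sigma, h, n, x0=0.0, seed=3):
    # Exact discretization of dX = alpha*(mu - X)dt + sigma dW at step h
    random.seed(seed)
    phi = math.exp(-alpha * h)
    sd = sigma * math.sqrt((1.0 - phi * phi) / (2.0 * alpha))
    x, path = x0, [x0]
    for _ in range(n):
        x = mu + phi * (x - mu) + sd * random.gauss(0.0, 1.0)
        path.append(x)
    return path

def estimate_ou(path, h):
    # AR(1) moment estimates: phi from the lag-1 autocovariance ratio,
    # mu from the sample mean; then alpha = -log(phi)/h
    n = len(path) - 1
    xbar = sum(path) / len(path)
    num = sum((path[t] - xbar) * (path[t - 1] - xbar) for t in range(1, n + 1))
    den = sum((path[t - 1] - xbar) ** 2 for t in range(1, n + 1))
    phi = num / den
    return -math.log(phi) / h, xbar   # (alpha_hat, mu_hat)

path = simulate_ou(alpha=2.0, mu=1.0, sigma=0.5, h=0.05, n=20000)
alpha_hat, mu_hat = estimate_ou(path, 0.05)
```
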
Based on m_t and M_t, the optimal combined estimating function for this model is obtained in the same way.
|
v3-fos-license
|
2023-02-18T14:50:36.836Z
|
2018-04-03T00:00:00.000
|
256952683
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-018-23855-9.pdf",
"pdf_hash": "72bb8bc834177c4ef94158cec5efc3ea413a1a13",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:738",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "72bb8bc834177c4ef94158cec5efc3ea413a1a13",
"year": 2018
}
|
pes2o/s2orc
|
Stochastic Modeling of Radiation-induced Dendritic Damage on in silico Mouse Hippocampal Neurons
Cognitive dysfunction associated with radiotherapy for cancer treatment has been correlated to several factors, one of which is changes to the dendritic morphology of neuronal cells. Alterations in dendritic geometry and branching patterns are often accompanied by deficits that impact learning and memory. The purpose of this study is to develop a novel predictive model of neuronal dendritic damages caused by exposure to low linear energy transfer (LET) radiation, such as X-rays, γ-rays and high-energy protons. We established in silico representations of mouse hippocampal dentate granule cell layer (GCL) and CA1 pyramidal neurons, which are frequently examined in radiation-induced cognitive decrements. The in silico representations are used in a stochastic model that describes time dependent dendritic damage induced by exposure to low LET radiation. Changes in morphometric parameters, such as total dendritic length, number of branch points and branch number, including the Sholl analysis for single neurons are described by the model. Our model based predictions for different patterns of morphological changes based on energy deposition in dendritic segments (EDDS) will serve as a useful basis to compare specific patterns of morphological alterations caused by EDDS mechanisms.
morphology following X-rays 14 , γ-rays 15,16 and proton irradiation [17][18][19] are observed to persist for at least 30-42 days after exposure and are shown to be correlated with impairments in episodic and spatial memory retention 19 .
Dendritic arborization patterns have an impact on the function and connectivity of neurons, capable of affecting the integration of inputs and propagation of signals. Formation of the dendritic tree is driven by the dynamics of elongation, branching and retraction 24 , which involve many cellular and molecular mechanisms that have been identified as regulators of dendritic growth and branching patterns 25 . Computer simulation of dendritic arborization patterns is a useful approach to discern the role of structural changes in producing functional deficits in the brain. Several mathematical and stochastic growth models have been developed to generate branching pattern variation for different types of neuron 24,[26][27][28][29][30][31][32] . There are also existing simulation software packages 33,34 and open-source resources 35 that can be used to generate in silico neurons.
In this paper, we develop a novel predictive model that characterizes the time-dependent neuronal dendritic degradation caused by exposure to low LET radiation. Computer simulated mouse hippocampal dentate granule cell layer (GCL) and CA1 pyramidal neurons, which are frequently examined in radiation-induced cognitive detriments, are first generated using simple stochastic growth models that follow the elementary rules of dendrite development 24,26,27,36,37 and adopt specifications that reproduce neuron morphometric parameters reported in rodent experimentation. We assume that energy deposition in dendritic segments (EDDS) is spatially random for low LET radiation, with the number and size increasing with absorbed dose. Thus, radiation-induced changes in neuronal morphology, expressed as reductions in total dendritic length, number of branch points and branch numbers, can be obtained using a probabilistic model. This model is used to determine if a given branch segment would be damaged, and a mathematical model of damaged segment kinetics represented by ordinary differential equations is used to determine whether damaged segments would eventually be "snipped", a term devised to distinguish this "event" from the neurobiological process of dendritic pruning. With this model, we evaluated structural changes of a single neuron. Results for a population of neurons are modeled by considering a correction for the fraction of cell loss, which increases with radiation dose.
Results
Computer simulated mouse hippocampal neurons. In our dendritic growth model shown in Fig. 1A, cylindrical branches are grown stochastically from the neuron cell soma. An initial segment radius of 3 μm is used and each segment step of twice the radius (cylindrical aspect ratio 1:1) can either undergo elongation or branching. Simple stochastic dendritic growth models have used different branching probabilities: a constant probability 24 , a probability as a function of branch length or the distance grown from the soma or previous branch point 26 , or a probability dependent on branch order and number of segments 28 . We adopted the branching probability as a function of branch length 26 but used a varying parameter, α, which represents a maximum branching probability 24,28 , dependent on branch order as a means to be consistent with the reported experimental morphometric parameters in mouse hippocampal neurons. In addition, neuronal self-avoidance is considered such that when a growing segment intersects an existing branch, that growing segment is retracted back to its branch point and a new direction will be randomly selected for the growing segment. Figure 2 shows the computer simulated representations of hippocampal neurons for young adult mice (age of 1 to 4 months) along with their morphometric parameters derived from 10 generated neurons. Granule cell layer (GCL) neuron parameters in Fig. 2A indicate a mean total dendritic length of 926.10 ± 127.14 μm, mean branch length of 132.3 ± 50.9 μm, mean number of branch points of 12.9 ± 3.5, mean branch number of 26.7 ± 7.8 and mean bifurcation angle of 56.02 ± 4.03° for in silico neurons, which are all comparable to the reported experimental morphometric parameters in young adult mouse hippocampal granule cell neurons: total dendritic length = 1298 ± 517 μm (NeuroMorpho.org ID numbers: NMO_06175, NMO_06176) [38][39][40] , mean branch length = 82 ± 11 μm 38,39 , number of branch points = 7 ± 1 40 , branch number = 18 ± 5 (NeuroMorpho.org ID numbers: NMO_06175, NMO_06176) 38,39 and mean bifurcation angle = 57.27 ± 5.70°. The graphs of branch number and mean branch length per branch order, as well as Sholl analysis, have "bell curve" shapes similar to the ones reported by Becker et al. 41 .
Radiation-induced alterations in neuronal dendritic structure. Dendritic damages caused by exposure to low LET radiation are conveyed by Sholl analysis. Our model considers the spatial dependence of the snips for a given radiation dose and Monte-Carlo trial, which lead to predictions of the reductions in total dendritic length, number of branch points and branch numbers. For a dendritic branch with more than one snip site, surviving segments and end-point branches are determined by the snipped-segments closest to the soma on the tip-to-soma direction pathway, as illustrated in Fig. 1B.
Our model of radiation-induced dendritic damage has two components: (1) a probabilistic model that evaluates if a given branch segment would be damaged by radiation exposure based on the EDDS, and (2) a mathematical model of damaged segment kinetics. For the first component, every segment in each dendritic branch of the computer simulated neuron is assessed for damage using a probability function that is dependent on the EDDS and neuronal segment volume, such that a high radiation dose and a small segment volume would result in a high damage probability. We define a parameter D_d that represents a characteristic dose at which 37% of the segments are undamaged and is a function of segment volume defined by a Hill-type equation. Supplementary Figure S1 shows the effects of varying different parameters on D_d and the damage probability (P_d). We decided to utilize a Hill function apparent constant of K = 0.01 because this value gives a varying radiosensitivity for a 0.2 μm to 0.5 μm segment radius, which corresponds to the 4th-7th branch order, and a constant radiosensitivity for segments found in the 1st-3rd branch order. The other Hill function apparent constant, D_m, and the Hill coefficient (η) are selected as the values that provide the best fit to the experimental data (as illustrated in Supplementary Figure S2). Table 1 shows the summary of parameters used for both GCL and CA1 pyramidal neurons (apical and basal).
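The first component can be sketched as follows. The exponential form P_d = 1 − exp(−D/D_d) is consistent with D_d being the dose at which 37% of segments remain undamaged; the specific Hill-type form of D_d(V_s) below, and the values of D_m and η, are illustrative assumptions (only K = 0.01 is taken from the text):

```python
import math

def characteristic_dose(v_seg, D_m, K, eta):
    """Hill-type characteristic dose D_d(V_s): small segments are more
    radiosensitive, saturating at D_m for large volumes (assumed form)."""
    return D_m * v_seg ** eta / (K + v_seg ** eta)

def damage_probability(dose, v_seg, D_m=10.0, K=0.01, eta=2.0):
    # P_d = 1 - exp(-D/D_d): at D = D_d, exactly exp(-1) ~ 37% survive
    D_d = characteristic_dose(v_seg, D_m, K, eta)
    return 1.0 - math.exp(-dose / D_d)

# Thin, high-order segments are damaged more readily than thick ones:
p_thin = damage_probability(dose=2.0, v_seg=0.05)
p_thick = damage_probability(dose=2.0, v_seg=1.0)
```
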
For the second component of the model, the kinetics of radiation-induced damaged segments is described using a stochastic solution to ordinary differential equations describing how each damaged segment can either be repaired or snipped. Supplementary Figure S3(A) displays a sample graph showing the number of undamaged, damaged and snipped segments as a function of post irradiation time. We first defined the snip reaction rate constant (α_S) as a function of radiation dose (refer to Supplementary Figure S3(B)). The linear quadratic dose function was then selected because we assumed that all damaged segments would be repaired or snipped at about 30 days after radiation exposure and that there should be a significant difference between 10 days and 30 days post exposure time for a 10 Gy radiation dose, based on experimental observations 15,17 . In Supplementary Figure S3(C), we assumed that the repair reaction rate constant (α_R) is a fraction of α_S. We decided to use α_R = 0.5α_S since it leads to plausible numbers of repaired (included in undamaged) and snipped segments.
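The second component, with each damaged segment independently repaired (rate α_R) or snipped (rate α_S), can be sketched with the Gillespie algorithm for these two first-order reactions; the rates and counts below are illustrative, not the paper's fitted values:

```python
import random

def gillespie_repair_snip(n_damaged, a_S, a_R, t_end, seed=4):
    """Stochastic simulation of damaged -> snipped (rate a_S) and
    damaged -> repaired (rate a_R), both first order in the damaged count."""
    random.seed(seed)
    t, d, snipped, repaired = 0.0, n_damaged, 0, 0
    while d > 0 and t < t_end:
        total = (a_S + a_R) * d
        t += random.expovariate(total)           # waiting time to next event
        if t >= t_end:
            break
        if random.random() < a_S / (a_S + a_R):  # which reaction fires
            snipped += 1
        else:
            repaired += 1
        d -= 1
    return d, snipped, repaired

# With a_R = 0.5*a_S (as assumed in the text), about 2/3 of damage is snipped:
d, snipped, repaired = gillespie_repair_snip(3000, a_S=0.2, a_R=0.1, t_end=1e9)
```
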
Comparisons of our modeling results with the experimental data for granule cell layer neurons are shown in Fig. 3. Modeling results of dendritic damages induced by γ-rays (Fig. 3(A)) and by proton irradiation (Fig. 3(B)) at 10 days and 30 days post exposure times are comparable to the reported experimental data 15,17 , measured from thin slices of brain tissue that contain populations of neurons. Experimental observations from slices of brain tissue should consider differences in the number of cells observed between controls and irradiated tissues due to irradiation-induced apoptosis. To translate our modeling results of dendritic structural changes from a single neuron to populations of neurons, we estimated characteristic doses for cell loss of D_0 = 25 Gy for γ-rays and D_0 = 18 Gy for proton radiation. These values are evaluated from reported experimental data [43][44][45] , but we also consider that neuron death could occur via soma death, excessive dendritic branch snipping (perhaps parallel to growth cone collapse) or other forms of apoptosis and/or autophagy. Reported experimental data for neuron death are expressed either by soma death evaluated by DAPI staining and TUNEL assay 43 or by excessive dendritic branch snipping, which is perhaps parallel to growth cone collapse that leads to apoptosis 44,45 . Our estimated values of D_0 for γ-rays and proton ion beams are interpolated from these reported experimental data for X-rays and carbon ion beams [43][44][45] and the relative biological effectiveness of the different radiation qualities. In contrast, experimental data of proton radiation-induced damages at 42 days post exposure time are measured from single neurons using Golgi staining 19 and are simulated by our model. In Fig. 4, our modeling results for CA1 pyramidal neurons are compared with the reported experimental data obtained by imaging single neurons 19 . Both apical and basal dendritic damages acquired by our model are similar to the experimental results.
Modeling results of dendritic damages obtained from imaging single neurons versus populations of neurons, and the time-dependent dendritic damages, are shown in Fig. 5. Significant differences in dendritic damage between single neurons and populations of neurons are revealed for γ-ray doses >1 Gy and proton radiation doses >0.5 Gy at 10 days post irradiation, and for γ-ray doses >2 Gy and proton radiation doses >1 Gy at 30 days post irradiation. Moreover, dendritic damage from single-neuron measurements induced by γ-rays is significantly different at 10 days and 30 days post irradiation for a dose as low as 0.5 Gy, while damage caused by proton radiation differs significantly between 10 days and 30 days post exposure only for doses >1 Gy. Also, similar dendritic damage is manifested from 30-42 days after exposure to proton radiation.
Additional modeling results are presented in Fig. 6 for both GCL and CA1 pyramidal neurons in the form of Sholl analysis and the dose-dependent number of snips. All graphs of Sholl analysis revealed no significant differences between the unirradiated and irradiated neurons, except for 10 Gy of γ-rays on GCL neurons where it shows significant reductions in dendritic arborization between 100 μm to 150 μm from the soma.
Discussion
Understanding the structure-function relationship of neurons is important to elucidate how alterations in dendritic structure, along with spine morphology that affects synaptic inputs and integration, can influence cognition. Studies have analyzed how the morphology of hippocampal GCL and CA1 pyramidal neurons impacts their functional properties 41,46 . In this paper, we develop a model that describes the time-dependent alterations in neuronal dendrites of hippocampal neurons (GCL and CA1 pyramidal neurons) induced by exposure to low LET radiation such as X-rays, γ-rays and protons. Our model consists of a probabilistic component that assesses which segments would be damaged by radiation exposure, and a mathematical component involving ordinary differential equations that describe the kinetics of damaged segments and determine how many segments would be repaired or snipped as a function of post irradiation time. The damage probability of a given segment is dependent on radiation dose and neuronal segment volume. We associate the energy deposition of ionizing radiation with the parameter D_d that depends on the segment volume (V_s). We assumed that D_d is defined by the Hill-type function, which provides a way to quantify the degree of dependency of D_d on V_s through the Hill coefficient (η), with a saturation dose equal to the parameter D_m. Moreover, we assume that each dendritic segment is discrete; thus, ordinary differential equations describing the kinetics of damaged segments are stochastically solved using the Gillespie algorithm 47 . Differences in the parameter estimates of Table 1 for γ-rays and protons suggest that protons are more effective, likely due to differences in microscopic energy deposition, which includes a component from nuclear recoil nuclei and neutrons.
In addition, filopodia and immature dendritic spine structures, where most excitatory synapses occur 48 , have been reported to be altered by radiation [14][15][16][17] and might therefore affect the radiosensitivity of dendrites. Our current model did not take into consideration dendritic spine structures and density in determining dendrite radiosensitivity. Future work will consider radiation effects on spine stability and the possibility that reductions in spine density influence dendritic morphology.
In the experiments considered 15,17,19 , a small number of mice per group (4 to 6) was used, possibly leading to inter-animal variability in neuron responses. In Fig. 2, simulated neurons represent hippocampal granule cell and CA1 pyramidal neurons of young adult mice, which is the typical mouse age (1 to 4 months) used in experimental studies of radiation-induced neuron damage. Furthermore, less variability in neuron morphometric parameters was observed in young adult mice, as the final steps in brain development occur at 20-30 days post conception 49 . In silico neurons shown in Fig. 2 are generated using estimated parameters (α, β, L_i, total dendritic length, etc.) based on neuron morphometric specifications reported in young adult mouse experiments. Our dendritic growth model can be used to simulate neurons of mouse models of different ages by modifying these estimated parameters.
As presented in Figs 3 and 4, our model accurately recapitulates the dendritic morphological changes caused by exposure to low LET radiation. These modeling results have utilized experimental data derived from measurements of imaged neuronal populations 15,17 or single neurons 19 . Due to the role of radiation-induced neuronal death 43-45,50 , our model predicts significant differences in measurements from imaging populations of neurons from brain tissue slices in contrast with single neuron imaging. These differences occurred at distinct radiation doses that depend on the type of radiation and post irradiation times. Note that in Fig. 5B, dendritic morphological changes at 30 and 42 days post irradiation are very similar. This is due to our assumption that all damaged segments would be repaired or snipped at about 30 days after radiation exposure. For future work, we can incorporate the delayed damage induced by activated microglia 10,11,51,52 to have a more precise description of morphological change at more protracted times after irradiation. Sholl analysis is a valuable tool to identify morphological characteristics of a neuron through dendritic arborization. Moreover, this analysis tool is also helpful in providing information useful in deciphering the mechanism/s responsible for the remodeling of neuronal structure caused by any agent 53 . For instance, pyramidal neurons have two main dendritic tree domains, apical and basal, which have different dendritic arborization patterns as delineated by Sholl analysis. Apical and basal dendrites have distinct synaptic inputs, excitability and modulation, although the degree and extent with which they function differently with one another and to other dendritic domains remains unclear 54,55 . Synaptic inputs on different dendritic domains or locations can be integrated differently to influence a particular neural activity related to certain cognitive outcomes 54 . 
Stress is known to cause morphological alterations in apical dendrites but not in basal dendrites of hippocampal pyramidal neurons 53,56,57 . Specifically, chronic immobilization stress reduces dendritic arborization of CA3 apical dendrites from 100 μm to 250 μm away from the soma 57 . In our radiation-induced dendritic damage model, arborization of CA1 apical dendrites appears to decrease from 80 μm to 140 μm from the soma (Fig. 6(C)), although not significantly, a finding that may well change following higher radiation doses. While speculative at present, this example does show the potential utility of our model in predicting different patterns of morphological alterations caused by radiation compared to other stressors or severing agents.
Another important factor that might affect radiation-induced changes in neuronal dendrites is the age of mouse models. Alterations in dendritic morphology, along with cellular connectivity, gene expression, ion potential dysregulation and other factors that may alter network connectivity and dynamics of neurons, are shown to be correlated with age-related cognitive and behavioral dysfunction 58 . Furthermore, young mice have more active neurogenesis, a process that diminishes significantly with age 22,23,[59][60][61] . Developing dendrites of adult-born neurons undergo pruning to attain homeostasis with neurons of similar dendritic structure 62 . Radiation sensitivity typically decreases with age, as dividing cells and cells undergoing active metabolic processes are typically more sensitive. However, less is known about the dependence of the radiation sensitivity of dendrites on age. Along with dendritic "snipping" caused by radiation exposure, the possibility that more damage might be observed in neurons undergoing active pruning at younger ages should be considered. Nevertheless, the age dependence of radiation-induced dendritic damage can be included in our mathematical model by modifying parameters of the characteristic dose (D_d) in equation (4), such that the apparent parameters K and D_m can depend on dendrite age, and/or by adding a term in equations (5), (6) and (7) with parameters that represent "active pruning" at younger ages. Due to the lack of experimental data showing the radiation sensitivity of neurons of different ages, we opt not to include "age" of neurons in our current model.
In our model, we assumed that radiation-induced changes in neuronal morphology are caused by "snipping" via dendritic fragmentation. Two cellular mechanisms of dendritic pruning are widely known: branch retraction and local degeneration (fragmentation), both observed in Drosophila 63 and less well characterized in rodents. The latter was observed in proximal dendrites, while the former occurred in distal branches and in proximal dendrites after fragmentation. Both mechanisms involve destabilization of the microtubule cytoskeleton after the severing event, followed by microtubule thinning and then phagocyte-aided fragmentation and/or retraction 63 . The mechanism of radiation-induced dendritic damage has not been established. We considered "snipping" through fragmentation as the (time-dependent) damage mechanism induced by radiation, since it requires fewer model parameters than a retraction mechanism, which would need retraction-rate parameters. Modeling of retraction-based damage can be considered once experimental data become available.
In conclusion, we have developed an in silico model that describes changes in dendritic morphometric parameters induced by low-LET radiation and that can also predict, through Sholl analysis, patterns of morphological change that differ from those caused by other stressors or dendrite-damaging agents (e.g. neurodegenerative diseases or chemotherapeutic drugs). Microdosimetric models of segment energy deposition spectra developed for heavy-ion irradiation 64-66 will be considered in future work and compared with the average-segment-dose results presented in this paper.
Methods
Dendritic growth model. Computer modeling of neuronal morphology is a useful tool to understand structure-function relationships and to recognize the role of structural changes in producing functional deficits in the brain. We have developed an in silico three-dimensional representation of dentate granule cell neurons in the hippocampus. Neuronal dendritic trees and branching patterns are formed with the following assumptions and morphometric determinants: (1) dendritic trees are defined by the number of segments, branch points and total length, and are constrained to fit into a specified volume; (2) elongation and branching of individual dendrites are described as stochastic processes in which the probability of branching is a function of the distance grown from the soma or from the previous branch point; (3) dendrite diameter decreases continuously with every elongation and branching step; and (4) isoneuronal avoidance of new fragments or growing segments is considered 24 .
Figure 6. Sholl analysis and dose-dependent snip distribution of GCL neuron (A,B) and CA1 pyramidal neuron (C,D). (A) GCL neuron exposed to γ-rays, (B) GCL neuron exposed to proton IR, (C) apical CA1 pyramidal neuron exposed to proton IR, (D) basal CA1 pyramidal neuron exposed to proton IR. (Error bars represent standard error of the mean.)

To generate in silico neurons, cylindrical branches are grown stochastically from the neuron cell soma with an initial radius of 3 μm and a segment step of twice the radius (cylindrical aspect ratio of 1:1). Each step can either undergo elongation or branching, and we have assumed that the probability of branching (P br ) of each dendritic branch is an exponential function of L i , the distance or segment length grown from the soma or the previous branch point, with parameters α and β that characterize a specific branching probability. For our simulation, we have assumed that hippocampal neurons (granule cell and pyramidal neurons) have the same parameters as mouse cerebellar Purkinje cells 26,28 . We used parameter β equal to 0.264 28 , while parameter α varies from 0.1 to 0.3 depending on the branch order, to be consistent with reported experimental morphometric parameters in mouse hippocampal neurons. Furthermore, the dendritic radius decreases continuously with every elongation or branching step until it reaches 0.2 μm at the dendritic tips. The decrease in dendritic radius at each elongation step is defined by a taper rate, and we assumed that mouse hippocampal granule cell layer neurons have the same taper rate as rat hippocampal pyramidal neurons 36 .
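As an illustration, the stochastic growth procedure above can be sketched in a few lines. The branching rule p_br = α·exp(−β·n), with n the number of steps since the last branch point, is an assumed form chosen only to mirror the description, and all numerical values and the branch-length cutoff are illustrative, not the paper's calibrated parameters:

```python
import math
import random

def grow_tree(alpha=0.2, beta=0.264, step=2.0, max_segments=200, rng=None):
    """Stochastically grow one dendritic tree (illustrative sketch).

    At each elongation step a tip bifurcates with an assumed probability
    p_br = alpha * exp(-beta * n), where n counts steps since the soma or
    the previous branch point.  Branches longer than 40 steps terminate.
    Returns (total_length, n_segments, n_branch_points).
    """
    rng = rng or random.Random(0)
    tips = [0]                       # each tip tracked by steps since last branch
    total_len, n_seg, n_bp = 0.0, 0, 0
    while tips and n_seg < max_segments:
        new_tips = []
        for n in tips:
            total_len += step        # elongate by one cylindrical segment
            n_seg += 1
            p_br = alpha * math.exp(-beta * n)
            if rng.random() < p_br:
                n_bp += 1
                new_tips.extend([0, 0])   # bifurcation: two daughter branches
            elif n < 40:                  # terminate overly long branches
                new_tips.append(n + 1)
        tips = new_tips
    return total_len, n_seg, n_bp
```

By construction, total length equals the segment count times the step size, so morphometric ratios such as tL/tL0 and BN/BN0 can be read off directly from the returned tuple.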
On the other hand, we defined the decrease in dendrite radius at every branching using the relationship R p ^(3/2) = R d1 ^(3/2) + R d2 ^(3/2), where R p is the radius of the parent dendrite and R d1 , R d2 are the radii of the daughter dendrites 27,37 . We have assumed that the diameters of the daughter dendrites after branching are equal, such that R d1 = R d2 = R d .
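For the equal-daughter case, a Rall-type 3/2-power rule (assumed here to correspond to the cited relationship) gives the daughter radius in closed form, R_d = R_p / 2^(2/3) ≈ 0.63·R_p:

```python
def daughter_radius(r_parent, exponent=1.5):
    """Radius of each of two equal daughter branches under a Rall-type
    power rule r_p**e = r_d1**e + r_d2**e (exponent 3/2 assumed here)."""
    return r_parent / 2.0 ** (1.0 / exponent)
```

Applied repeatedly from the 3 μm initial radius, this rule tapers branches toward the 0.2 μm tip radius within a handful of branch orders.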
One unique feature of our in silico neurons is that each dendritic segment, branch, and branch point has a unique index or identification (ID) number, which enables us to monitor changes in neuronal dendritic structure caused by any damage.
Neuronal dendritic structure after exposure to radiation. Changes in neuronal dendritic structure caused by exposure to low linear energy transfer (LET) radiation, such as X-rays, γ-rays and protons, are evaluated using the average segment dose. Radiation-induced dendritic damage is expressed in Sholl analyses and as the fraction of irradiated over unirradiated (X/X 0 ) morphometry parameters, such as total dendritic length (tL/tL 0 ), number of segments (BN/BN 0 ) and number of branch points (BP/BP 0 ).
In our radiation-induced dendritic damage model, the number of snips or snip sites on dendritic segments is determined stochastically in the following steps: (1) each dendritic segment is assessed for damage after radiation exposure (IR) using a probability function that depends on the radiation dose (D) and the neuronal segment volume (V s ); (2) each damaged segment can either be repaired or snipped, depending on the kinetics of IR-induced damaged segments, with all damaged segments ranked in order of increasing damage probability (P d ); and (3) the time-dependent number of snips is evaluated from the kinetics of damaged segments, represented by ordinary differential equations, with higher-damage-probability segments given higher priority for snipping.
The probability that a dendritic segment is damaged after exposure to low-LET radiation is described by the exponential function P d = 1 − e^(−D/D d ), where D is the average segment dose and D d is the characteristic dose at which 37% of the segments remain undamaged. D d depends on the segment volume (V s ), and we assumed that it is defined by a Hill-type function with apparent parameters K and D m and Hill coefficient η. Each damaged segment is either repaired or snipped. The numbers of repaired and snipped segments are characterized by ordinary differential equations in S 0 , S d and S s , which represent undamaged/repaired, damaged and snipped segments, respectively, with α d , α R and α S the damage, repair and snip reaction rate constants. For acute IR, the first terms in equations (5) and (6) are not considered; the initial numbers of undamaged and damaged segments are determined stochastically by P d , and the initial number of snipped segments is zero. Furthermore, we assumed that each dendritic segment is discrete; therefore, the ordinary differential equations are solved stochastically using the Gillespie algorithm 47 .
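A minimal sketch of the acute-irradiation kinetics, assuming two competing first-order reactions, damaged→repaired and damaged→snipped, with illustrative rate constants (the paper's α_R, α_S values are not reproduced here), simulated with a direct Gillespie method:

```python
import random

def gillespie_snips(n_damaged, a_R=0.5, a_S=0.2, t_end=10.0, rng=None):
    """Gillespie simulation of damaged-segment fate after an acute dose:
    each damaged segment is repaired (rate a_R) or snipped (rate a_S).
    Rate constants and t_end are illustrative assumptions.
    Returns (repaired, still_damaged, snipped) counts at t_end."""
    rng = rng or random.Random(1)
    S0, Sd, Ss, t = 0, n_damaged, 0, 0.0
    while Sd > 0:
        total_rate = (a_R + a_S) * Sd
        t += rng.expovariate(total_rate)   # time to next reaction event
        if t > t_end:
            break
        if rng.random() < a_R / (a_R + a_S):
            S0 += 1                        # segment repaired
        else:
            Ss += 1                        # segment snipped
        Sd -= 1
    return S0, Sd, Ss
```

Segment counts are conserved, and the long-time snipped fraction approaches a_S/(a_R + a_S), which is how the rate constants shape the dose response in this sketch.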
SCIeNtIFIC REPORTS | (2018) 8:5494 | DOI:10.1038/s41598-018-23855-9

Neuronal dendrite structural changes induced by radiation exposure can be experimentally monitored in several ways. The Golgi staining method may be used to image individual neurons and evaluate structural changes in a single neuron 19 . A more sensitive and robust method using neurons expressing enhanced green fluorescent protein (eGFP) can monitor structural changes, but experimental data are reported for populations of neurons 15,17 . To convert our modeling results for structural changes from a single neuron to a population of neurons, we used a factor derived from the survival of neurons, represented by the exponential function F N = e^(−D/D 0 ), where F N is the fraction of surviving neurons after irradiation, D is the radiation dose and D 0 is a characteristic dose at which 37% of neurons survive. Translating dendritic structural changes from a single neuron to a population of neurons is then determined from this surviving fraction, where X and X 0 refer to irradiated and unirradiated morphometry parameters, respectively.
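The survival factor can be written directly from its definition; at D = D_0 the surviving fraction equals e^(−1) ≈ 0.37:

```python
import math

def surviving_fraction(dose, d0):
    """Fraction of neurons surviving a given dose, F_N = exp(-D/D0);
    at dose == d0, about 37% of neurons survive."""
    return math.exp(-dose / d0)
```

Because F_N multiplies the single-neuron morphometry when translating to a population, any error in D_0 propagates directly into the population-level X/X_0 ratios.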
Data analysis and mathematical modeling. All figures and plots, data fitting and analysis, modeling and computer simulation of neurons are accomplished using Matlab 2016a (Mathworks, Inc.). The differential equations describing the kinetics of radiation-induced damaged segments are solved using a Gillespie algorithm written in Matlab.
|
Disturbance Recognition and Collision Detection of Manipulator Based on Momentum Observer
Increasing requirements for the safety of human-robot interaction and for cost-effective collision detection are rapidly promoting the development of collision detection technology without torque sensors. To address nonlinear disturbance factors that may make collision detection unstable or even incorrect, this paper proposes a strategy that treats friction as the disturbance term in manipulator motion for collision detection. A manipulator joint disturbance model was established based on the LuGre dynamic friction model, and an external torque observer was designed based on generalized momentum. Friction measurement was then realized using the external torque observer, and the model parameters were identified with a genetic algorithm. Compensating the disturbance with the friction model reduces collision detection errors and makes the method applicable to variable working conditions. Finally, the accuracy of the constructed disturbance model and the performance of the proposed collision detection method were validated experimentally.
Introduction
With the expansion of robot applications from industrial environments to home services, medical treatment, and space exploration, human-robot collaboration has become a hot topic [1,2]. Compared with industrial robots that can only work inside a fence, collaborative robots can share working spaces with humans, and the safety of the human, the environment, and the robot body in unknown environments is worth studying in depth [3]. Unlike traditional industrial robots operating in structured environments, the working environment of collaborative robots is neither isolated nor predictable. The International Organization for Standardization (ISO) has proposed technical standards for collaborative robot safety, including safety specifications and risk assessments, to minimize the potential risk of human-robot collision [4]. Solving these safety problems can start with collision detection. When a robot runs in an unstructured environment, uncertainty in the workspace can cause accidental human-robot collisions [5]. A safe and dexterous collaborative robot can monitor each joint torque and respond when a collision is detected. Optimizing human-robot collision detection in unpredictable environments is the most critical and urgent issue facing collaborative robots. Some scholars have already made efforts in this area and achieved certain results [6][7][8].
The traditional methods applied for collision detection of collaborative robots often adopt the scheme of installing an external sensor, such as a joint torque sensor [9,10]. Haddadin et al. constructed an external torque observer based on generalized momentum to achieve collision detection on various robots with joint torque sensors, such as the DLR LWR-III, KUKA LBR iiwa, and FRANKA EMIKA [11], and further analyzed the energy characteristics to improve control based on joint torque information. The rest of the article is organized as follows. In Section 2, the derivation of the manipulator dynamic model is introduced, and the construction of the second-order external torque observer is described in detail. Establishment of the LuGre friction model, observation of the friction torque based on the observer, and identification of the model parameters by the genetic algorithm are presented in Section 3. Section 4 sets up the collision detection experiments on the physical manipulator and analyzes the results. Section 5 provides conclusions drawn from this research.
Manipulator Dynamics Modeling
When a collision occurs between the manipulator and its surroundings, it is equivalent to an external force acting at the collision point, which generates additional torque on each manipulator joint. The premise of accurately analyzing the collision torque of the manipulator is to establish a precise dynamic model [28,29].
Based on the improved D-H coordinate system, the Newton-Euler recursion is used to establish the manipulator dynamic equilibrium equation. The manipulator dynamics model can be expressed as Equation (1):

M(q)q̈ + C(q, q̇)q̇ + G(q) + τ F,q = τ J
Bθ̈ + τ F,θ + τ J = τ m

where M(q) ∈ R^(n×n) represents the symmetric positive definite inertia matrix of the manipulator links, C(q, q̇)q̇ ∈ R^n contains the centrifugal and Coriolis torque vector, G(q) ∈ R^n represents the gravity torque, B is the motor inertia matrix, K J is the joint stiffness matrix, D J is the joint damping matrix, τ F,q is the friction torque on the link side, τ F,θ is the friction torque on the motor side, τ m is the output torque of the motor, q is the position of the manipulator link, θ is the motor position, and q̇ is the velocity of the manipulator link.
In addition, the output torque of the joint can be defined as τ J . Setting τ J = K J (θ − q) and ignoring the joint damping term D J [11], the dynamic model in Equation (1) can be simplified as Equation (2). When a collision occurs, the joint torque increases, so the dynamic model is modified as Equation (3) by adding the external collision torque τ ext to the link-side equation.
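For intuition, the collision torque in Equation (3) can be read as a model residual. A 1-DOF sketch follows; the sign convention is an assumption, and note that this naive estimate requires the measured joint acceleration, which the momentum observer of the next section is designed to avoid:

```python
def external_torque_residual(tau_m, q_dd, q_d, M, C, G, tau_F):
    """1-DOF sketch: external torque as the residual between the
    model-predicted joint torque (inertia + Coriolis + gravity + friction)
    and the motor torque.  Requires the acceleration q_dd directly."""
    return M * q_dd + C * q_d + G + tau_F - tau_m
```

With a perfect model and no collision the residual is zero; a collision appears as a nonzero residual equal to the external torque.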
External Torque Observer Design
Haddadin et al. proposed an external torque observer based on generalized momentum to detect collisions [11]. In this paper, the disturbance force was classified as an external force, and an external torque observer based on generalized momentum was constructed following that observer design. When no collision occurs, this observer is in fact a disturbance observer. Since friction is the main part of the disturbance, the friction torque of each joint can be observed through the disturbance observer, avoiding the introduction of the joint angular acceleration q̈.

Sensors 2020, 20, 4187

The generalized total momentum p tot of the manipulator system can be defined as Equation (4):

p tot = p q + p θ = M(q)q̇ + Bθ̇

where p q is the link momentum and p θ is the motor momentum. Differentiating Equation (4) with respect to time t yields Equation (5). According to the skew-symmetry property of manipulator dynamics, Ṁ(q) = C(q, q̇) + C^T(q, q̇), Equation (5) can be simplified to Equation (6), where K I represents the product of the torque constant of the joint motor and the reduction ratio of the joint reducer, and i is the motor output current value (τ m = K I · i), ignoring the torque caused by the motor inertia Bθ̈.
When no collision occurs with the manipulator, τ ext = 0, and the observer estimates the total disturbance torque τ F,tot (the sum of the link-side and motor-side friction torques). In order to meet the requirements of fast response and stability, and to overcome the shortage of adjustable parameters in a first-order system, the first-order observer was optimized into a second-order external torque observer, whose dynamic model is given by Equation (7), where K 1 and K 2 are diagonal gain matrices. The external torque observer output r F is given by Equation (8); differentiating Equation (8) with respect to time yields a second-order equation in r F with gain terms K 1 and K 1 K 2 . According to the Laplace transform, the second-order observer can be represented as the product of a first-order low-pass filter and a first-order high-pass filter, giving Equation (9). The low-pass and high-pass filters were designed to cover the collision frequency range, including high-frequency torque components from fast, strong impacts and low-frequency torque components from slow, continuous contacts. Considering that the second-order external torque observer may exhibit large overshoot and steady-state error [30,31], a control algorithm was added to it to reduce detection delay and oscillation; the improved observer is defined as Equation (10), where K 3 is the diagonal gain matrix.
According to the Laplace transform, Equation (11) can be drawn as follows. To detail the logical architecture of the proposed estimation method, the process of using the external torque observer to identify the LuGre model and then obtaining the observed external torque after LuGre model compensation is shown in Figure 1.
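As a self-contained illustration of the generalized-momentum idea (a first-order, 1-DOF simplification, not the paper's second-order design), the following sketch assumes constant inertia, C = 0, an illustrative gain K, and known gravity/friction terms; it recovers the external torque without using the joint acceleration:

```python
def momentum_observer(tau_m, q_dot, dt, M=1.0, G=0.0, tau_F=0.0, K=50.0):
    """First-order generalized-momentum observer for a 1-DOF joint.

    r = K * (p - p0 - integral of (tau_m - G - tau_F + r) dt)
    In continuous time this obeys dr/dt = K * (tau_ext - r), so r tracks
    the external torque with time constant 1/K.  All parameters are
    illustrative.  Returns the residual time series."""
    r, integral = 0.0, 0.0
    p0 = M * q_dot[0]                 # initial generalized momentum
    out = []
    for tm, qd in zip(tau_m, q_dot):
        p = M * qd                    # measured momentum (no acceleration needed)
        r = K * (p - p0 - integral)
        integral += (tm - G - tau_F + r) * dt
        out.append(r)
    return out
```

For example, with zero motor torque and a constant external torque of −1 N·m decelerating the joint (q̇ = −t), the residual converges to −1.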
Friction Modeling and Parameter Identification
Friction models mainly include static models such as Coulomb-Viscous friction model and Stribeck model, and dynamic models such as Dahl and LuGre. The LuGre model is based on the average deformation of the bristle, using first-order differential equations to describe Coulomb friction, viscous friction, Stribeck friction, presliding friction, variable static friction, friction hysteresis, and so on [32,33]. The static characteristics and dynamic characteristics of friction can be well described by the LuGre model. It is a dynamic friction model that is relatively complete and easy to implement.
The accuracy of the friction model significantly affects the performance of the manipulator. In this paper, the LuGre model was selected to describe the manipulator joint friction; this exponential model captures the nonlinear characteristics of friction well. Since each manipulator joint is rotational, the friction torque τ f,tot can be expressed by Equation (12):

dz/dt = ω − (σ 0 |ω| / g(ω)) z
g(ω) = τ c + (τ s − τ c ) e^(−(ω/ω s )²)
τ f,tot = σ 0 z + σ 1 dz/dt + σ 2 ω

where z is the state variable representing the average deformation of the bristles, ω is the rotational angular velocity, σ 0 is the bristle stiffness coefficient, σ 1 is the micro-damping coefficient, σ 2 is the viscous friction coefficient, τ c is the Coulomb friction, τ s is the static friction, and ω s is the Stribeck speed. The accuracy of the identified parameters largely determines the credibility of solving practical problems based on the model; therefore, the friction model parameters must be identified with an appropriate identification algorithm [34,35]. In this paper, a genetic algorithm with strong parallel iteration ability was selected to identify the parameters. There are six unknown parameters (σ 0 , σ 1 , σ 2 , τ c , τ s , ω s ) in Equation (12) to be identified.
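A minimal Euler integration of the standard LuGre equations follows; the parameter values are illustrative placeholders, not the identified values reported in Table 1:

```python
import math

def lugre_step(z, omega, dt, s0=8e4, s1=300.0, s2=0.4,
               tau_c=1.0, tau_s=1.5, omega_s=0.05):
    """One Euler step of the standard LuGre friction model:
        g(w)  = tau_c + (tau_s - tau_c) * exp(-(w/omega_s)**2)
        dz/dt = w - s0 * |w| / g(w) * z
        tau_f = s0*z + s1*dz/dt + s2*w
    Parameter values are illustrative.  Returns (z_next, tau_f)."""
    g = tau_c + (tau_s - tau_c) * math.exp(-(omega / omega_s) ** 2)
    dz = omega - s0 * abs(omega) / g * z
    tau_f = s0 * z + s1 * dz + s2 * omega
    return z + dz * dt, tau_f
```

In steady sliding (dz/dt → 0) the friction reduces to g(ω)·sign(ω) + σ2·ω, i.e. the Stribeck curve plus viscous friction, which is a useful sanity check on any identified parameter set.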
To facilitate the identification of the model parameters, that is, to identify the above six dynamic and static parameters at once, the LuGre friction model was improved and discretized. The micro-displacement z and micro-velocity dz/dt were used as intermediate system variables to simplify the model and avoid direct measurement. Assuming a discretized sampling time interval ∆T and discretized time k, the recursive formula of the discretized LuGre friction model is given as Equation (13). Because the LuGre friction model is highly nonlinear and contains first-order differential terms, parameter identification by a genetic algorithm can easily become trapped in local optima [36]. To address this, different initial parent populations were used across rounds of evolutionary optimization to avoid the local optimum problem in identifying the parameters of the highly nonlinear differential LuGre friction model.
Define the friction torque error e as Equation (14): e = τ F,tot − τ f,tot , where τ F,tot is the friction torque observed by the designed observer and τ f,tot is the friction calculated with the LuGre friction model. The objective function of the genetic algorithm is defined as Equation (15), the sum of e(k)² over the N samples; the goal of identification is to minimize the objective function J. This study supposed that the manipulator moves along a sinusoidal trajectory y = 0.6 sin(6.28 × 0.2 × t), sampled at intervals of 0.008 s for a total of 1251 samples. The genetic algorithm parameters were set as follows: initial population 200, evolution generations 300, crossover probability 0.8, and mutation probability 0.2. The ranges of the parameters to be identified were set as σ 0 ∈ (60000, 100000], σ 1 ∈ (0, 500]. The parameter identification of the three-joint manipulator by the genetic algorithm was then performed. Since the initial and final speeds of the planned sinusoidal trajectory were not zero, fifth-degree polynomial curves were used to connect the sinusoidal trajectory, to avoid data errors during starting or stopping and to protect the manipulator. The manipulator was run for several cycles, and observation data from a stable middle cycle were used to identify the friction model parameters. The average of 10 identification runs is shown in Table 1. The six unknown parameters of the discretized LuGre model were thus identified by the genetic algorithm; this avoids the partial effectiveness of linear identification in the two-step identification method and improves the accuracy of parameter identification. Based on the identified model parameters in Table 1, the joint friction torque can be calculated with the LuGre model.
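The identification objective of Equations (14)-(15) reduces to a sum of squared errors between observed and modeled friction, which the genetic algorithm minimizes over the six parameters:

```python
def identification_objective(tau_obs, tau_model):
    """Objective J = sum over k of e(k)**2, where e(k) is the difference
    between the observer-measured friction and the LuGre-model friction
    (Equations (14)-(15)).  The identification minimizes J."""
    return sum((a - b) ** 2 for a, b in zip(tau_obs, tau_model))
```

Each genetic-algorithm individual encodes a candidate parameter vector (σ0, σ1, σ2, τc, τs, ωs); its fitness is simply J evaluated over the 1251 recorded samples.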
Observation of the disturbance with the designed observer and calculation of friction with the identified model are shown in Figure 2. The disturbance observations are almost consistent with the friction calculations. The calculation results are further verified as follows.
The friction is complex and variable and is highly related to the direction of the joint speed, which makes it difficult to extract the friction from the joint torque. A reliable method for this problem was proposed by the authors of [37]: since the speed direction does not affect the joint torque computed from the dynamics model, two trajectories are executed with the same position, acceleration, and speed magnitude but opposite speed directions. Subtracting the joint torque values of these two trajectories yields twice the friction value. Following this procedure, the friction during the sinusoidal trajectory performed by the manipulator was extracted from the output motor torque τ m . To quantify the error between the calculated and actual friction values, the RMS (root mean square) error is given in Table 2. Considering other nonlinear factors, the RMS error is somewhat large but still within the range of theoretical error, which supports the accuracy of the model identification results.
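The mirrored-trajectory extraction and the RMS error metric above can be sketched as follows; the torque samples in the example are hypothetical:

```python
import math

def extract_friction(tau_fwd, tau_bwd):
    """Friction from mirrored trajectories (same |speed|, opposite sign):
    subtracting the two measured joint torques cancels the inertia,
    Coriolis and gravity terms and leaves twice the friction."""
    return [(a - b) / 2.0 for a, b in zip(tau_fwd, tau_bwd)]

def rms_error(x, y):
    """Root-mean-square error between two equal-length sample series."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```

For instance, if the dynamics contribute 2.0 N·m in both directions and friction is ±0.5 N·m, the forward/backward torques are 2.5 and 1.5 N·m and the extraction recovers 0.5 N·m.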
According to the LuGre friction model obtained by identification, the friction of each operating joint can be calculated in real time and used to compensate the external torque observed by the external torque observer. This strategy decreases the error caused by friction and other disturbances in the observed external torque and raises the precision of the observer enough to make it applicable to collision detection.
Collision Detection and Experiment Validation
In this paper, experiments were conducted on the modular lightweight cooperative manipulator developed independently by our laboratory. The experimental platform consists of the manipulator body and an industrial control computer. The industrial computer uses the Xenomai kernel to extend Linux [38,39], which meets the real-time control requirements of the manipulator, and communicates with the servo drivers through the CAN bus. The structure of the experimental platform and control system is given in Figure 3a,b, the size parameters of the manipulator links in Figure 3c, and the positions of the encoders in Figure 3d. The input side of each manipulator joint is equipped with an incremental encoder and the output side with an absolute encoder. The absolute encoder is a 19-bit single-turn encoder with a resolution of 0.0007 and a repeated positioning accuracy of 0.001°, and the incremental encoder provides 2500 pulses with a resolution of 10000. The joint position and speed required by the external torque observer designed in our research were measured by the high-precision absolute encoder and the incremental encoder, respectively. After compensation with the LuGre friction model, the external torque changes of the manipulator can be observed in real time.
However, because the disturbance term includes friction and other nonlinear factors, some error remains between the value observed by the external torque observer and the actual external torque. In the collision detection experiments, this was handled by setting a threshold to avoid false detection.
According to the standards in ISO/TS 15066:2016, since the manipulator body has a cylindrical shape that produces a large contact area when a collision occurs, the actual collision pressure is much smaller than the maximum allowable pressure, so the actual collision force is the main consideration. When the manipulator operates without disturbance for a long time [0, T], the maximum value of the external torque observed by the observer is set as a threshold, defined as µ max = max{µ(t), t ∈ [0, T]}, where µ(t) is the observed external torque after LuGre model compensation. Considering the stability of collision detection, a small safety margin ε safe > 0 is added, giving the threshold ε = µ max + ε safe . ε = 7.80 N·m was selected as a reasonable threshold determined by six hours of experimental observation; this threshold satisfies the human injury thresholds recommended in ISO/TS 15066:2016. The occurrence of a collision is judged by the threshold collision detection function f(µ(t)) = 1 if µ(t) > ε and f(µ(t)) = 0 otherwise. When link i is involved in a collision, the elements of the fault signature matrix η are determined by this function, and the isolation of the collision link is achieved from η.
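The threshold test can be sketched directly from the definitions above; the split between the nominal maximum and the safety margin in the example is hypothetical, chosen only to reproduce a threshold of the same order as the paper's 7.80 N·m:

```python
def detect_collision(mu, mu_max_nominal, safety_margin=0.5):
    """Threshold collision test f(mu): returns 1 when the observed external
    torque mu exceeds eps = mu_max_nominal + safety_margin, else 0.
    The specific numbers used in testing are illustrative."""
    eps = mu_max_nominal + safety_margin
    return 1 if mu > eps else 0
```

For example, a nominal disturbance maximum of 7.3 N·m plus a 0.5 N·m margin gives a 7.8 N·m threshold; observed torques above it flag a collision, values below it are treated as residual disturbance.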
According to standards in ISO/TS 15066:2016, since the manipulator body shapes with a cylindrical design in that a large contact area when a collision occurs, the actual collision pressure is much smaller than the maximum allowable pressure, so the actual collision force becomes the main consideration. When the manipulator makes the operation occur without disturbance for a long time where the i link is involved in a collision, determining the elements in the fault signature matrix η by the above function, the isolation of collision link is achieved by In order to verify the effectiveness of the disturbance recognition method based on the external torque observer and the performance of the collision detection between the manipulator and the outside, it is assumed that the task of the manipulator is to grab the workpiece located at the lower left of the manipulator. When working, we consciously touch its links from all direction to simulate the possible collision during the process of actual human-robot collaboration. In the human-robot collision, there is no pain in the human body. Since the average pain tolerance of the human arm is 150~160 N, the contact force is guaranteed to be less than 150 N. So, the intensity of the collision is in accordance with the safe collision amplitude specified in ISO/TS 15066:2016.
. In order to verify the effectiveness of the disturbance recognition method based on the external torque observer and the performance of the collision detection between the manipulator and the outside, it is assumed that the task of the manipulator is to grab the workpiece located at the lower left of the manipulator. When working, we consciously touch its links from all direction to simulate the possible collision during the process of actual human-robot collaboration. In the human-robot collision, there is no pain in the human body. Since the average pain tolerance of the human arm is 150~160 N, the contact force is guaranteed to be less than 150 N. So, the intensity of the collision is in accordance with the safe collision amplitude specified in ISO/TS 15066:2016.
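The threshold rule above can be sketched in a few lines. This is a minimal illustration, not the paper's code: the function names are ours, and we apply the threshold to the magnitude of the observed torque, since a collision from either direction can drive µ(t) negative.

```python
# Sketch of the threshold-based collision detection rule described above.
# mu(t) is the external torque observed after LuGre friction compensation;
# calibrate_threshold / detect_collision are hypothetical names.

def calibrate_threshold(mu_samples, eps_safe=0.5):
    """epsilon = max|mu(t)| over a disturbance-free run [0, T],
    plus a small safety margin eps_safe."""
    mu_max = max(abs(m) for m in mu_samples)
    return mu_max + eps_safe

def detect_collision(mu_t, epsilon):
    """f(mu(t)) = 1 if |mu(t)| > epsilon else 0."""
    return 1 if abs(mu_t) > epsilon else 0

# Example: a quiet run peaking at 7.3 N*m plus a 0.5 N*m margin gives
# a 7.8 N*m threshold, matching the value used in the experiments.
quiet_run = [0.2, -1.1, 7.3, 3.0]
eps = calibrate_threshold(quiet_run, eps_safe=0.5)
print(eps)                           # 7.8
print(detect_collision(6.9, eps))    # 0: normal operation
print(detect_collision(9.4, eps))    # 1: collision detected
```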
In this paper, multiple collision experiments in different directions were arranged, and three of them are described in detail. The manipulator moved from the initial pose shown in Figure 4a. The first experiment was a waist joint collision, the second a shoulder joint collision, and the third an elbow joint collision. The test plan and experimental effects are illustrated in Figure 4b-d.

When a collision occurred during the operation of the manipulator, the joint torque increased dramatically; the external torque observation changed suddenly and differed clearly from its normal value. Therefore, the occurrence of a collision could be determined by the external torque observation exceeding the threshold. The observed external torque curves of the above collision experiments are shown in Figures 5-7, both without friction compensation and after friction compensation. Comparing the results in Figures 5-7 shows that the deviation of the observed external torque was large without friction compensation but was significantly reduced after compensation. Friction compensation could therefore effectively shrink the collision detection threshold and improve the sensitivity of the proposed collision detection method.

In the first collision experiment, shown in Figure 4b, the collision direction was set perpendicular to the plane formed by the second and third manipulator links, and the collision point was set at the end of the second link. This experiment simulated a human-robot collision caused by the waist joint driving the manipulator to grab the workpiece in the horizontal plane. The results in Figure 5 show that the most obvious mutation appeared in the external torque observation of the waist joint. When this observation increased beyond the set threshold of 7.8 N·m, a waist joint collision could be declared. The collision occurred at 7.104 s, the detected collision time was 7.184 s, and the detection delay was about 0.08 s. When contact occurred, friction between the human body and the collision area hindered the continuing movements of the shoulder and elbow joints, so a torque change was also observed at those joints during the collision. The observed external torque is largely consistent with the theoretical analysis.
In the second collision experiment, shown in Figure 4c, the collision point was set at the second link of the manipulator and the collision direction was set along the tangent of the shoulder joint motion, opposing its downward movement. This experiment simulated a human-robot collision caused by the shoulder joint driving the manipulator to grab the workpiece in the vertical plane. Figure 6 shows that the collision had the greatest impact on the shoulder joint. When the external torque observation of the shoulder joint increased beyond the set threshold of 7.8 N·m, a shoulder joint collision could be declared. The collision occurred at 5.872 s, the detected collision time was 5.960 s, and the detection delay was 0.088 s. Because the collision direction was offset in space from the axis of the waist joint, a moment was generated that hindered the waist joint's movement and affected its torque observation. The impact on the elbow joint was the smallest because there was no contact with the third link.
In the third collision experiment, shown in Figure 4d, the collision point was set at the end of the manipulator and the collision direction was perpendicular to the ground, opposite to the movement direction of the manipulator. This experiment simulated a collision caused by the end effector during motion. Figure 7 shows that the external torque changes of the elbow and shoulder joints were the clearest, because the collision directly hindered their movement and had the greatest impact on them. When the external torque observation of the elbow joint increased beyond the set threshold of 7.8 N·m, an elbow joint collision could be declared. The collision occurred at 5.912 s, the detected collision time was 6.008 s, and the detection delay was about 0.096 s. Because the collision direction was parallel to the axis of the waist joint, the impact on the waist joint was the smallest.
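The three experiments suggest how the collided link can be isolated from the per-joint observations: a contact on link i produces a reaction torque at the joints proximal to it, while the joints beyond it are (ideally) unaffected. The sketch below encodes that reading of the fault-signature idea — it is our interpretation for illustration, not the paper's exact matrix construction, and the torque values are invented.

```python
# Hedged sketch of collision-link isolation: take the collided link to
# be the outermost joint whose observed external torque exceeds the
# threshold, since joints distal to the contact see no reaction torque.

def isolate_link(mu_joints, epsilon):
    """Return the 1-based index of the collided link, or None."""
    hit = [i + 1 for i, mu in enumerate(mu_joints) if abs(mu) > epsilon]
    return max(hit) if hit else None

# End-effector collision: elbow and shoulder excited, waist below
# threshold -> link 3 is isolated.
print(isolate_link([2.1, 9.5, 11.2], epsilon=7.8))  # 3
# Contact on the second link: waist and shoulder excited -> link 2.
print(isolate_link([8.3, 12.0, 1.4], epsilon=7.8))  # 2
```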
The experimental results show that the external torque observation fluctuated slightly around zero while the manipulator operated normally and responded quickly with an abrupt change when a collision occurred. Across the different collision schemes, the sudden changes in the observed values are consistent with the theoretical analysis of the actual manipulator torque. Moreover, after friction compensation the fluctuation of the external torque observation is clearly distinguishable from the collision torque, so the occurrence of a manipulator collision can be detected effectively.
To verify the general applicability and repeatability of the proposed collision detection method, 50 collision experiments were conducted with random collision directions in space, with the collision points set randomly on the second or third manipulator link. The results showed that every collision was detected by the designed procedure, for a detection success rate of 100%. In these experiments, the longest detection delay was 0.096 s and the shortest was 0.064 s. In all collisions, the detected external torque was less than 8 N·m, which ensures the sensitivity of collision detection.
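The reported detection delays (0.064–0.096 s) are simply the gap between the actual collision instant and the first sample at which the observed torque crosses the threshold. A minimal sketch, with illustrative sample data rather than measured values:

```python
# Detection delay = time of first threshold crossing at/after the
# collision instant, minus the collision instant.

def detection_delay(times, mu, epsilon, t_collision):
    for t, m in zip(times, mu):
        if t >= t_collision and abs(m) > epsilon:
            return t - t_collision
    return None  # collision not detected in this trace

# Invented trace: collision at 5.872 s, torque crosses 7.8 N*m at 5.96 s,
# so the delay works out to 0.088 s.
times = [5.90, 5.92, 5.94, 5.96, 5.98]
mu    = [0.3,  2.0,  6.5,  9.1,  12.4]
print(detection_delay(times, mu, 7.8, t_collision=5.872))
```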
Besides, this scheme only requires collecting the position, velocity, and current of each manipulator joint, which avoids the influence of acceleration noise on the external torque observation. The solution can be applied to detect collisions on most manipulators with current feedback. To ensure the safety of human-robot collaboration and reduce the risk caused by a human-robot collision, the manipulator should switch its motion mode once a collision is detected and move out of the collision area. The simplest safe strategy is to stop the manipulator, but a stopped manipulator cannot leave the collision area when it squeezes against a person. Another solution is to drive the manipulator in reverse to escape the collision area; this requires constructing a fault signature matrix for accurate collision isolation, and an inaccurate position judgment may make the reverse action cause secondary injury. The safest method is to switch the manipulator into a zero-gravity mode: the servo controller runs in torque-control mode, gravity and friction are overcome by the joint output torque, and the compliance of the manipulator ensures collision safety.
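The three reaction strategies above (stop, reverse, zero-gravity compliance) can be combined into a simple selector. This is an illustrative sketch of one plausible policy consistent with the discussion, with names of our own choosing, not a strategy stated by the paper:

```python
# Illustrative post-detection reaction policy: reverse out of the
# contact only when the collided link has been isolated reliably;
# otherwise fall back to compliant zero-gravity mode, since an
# inaccurate reverse motion risks secondary injury.

def choose_reaction(collision_detected, isolated_link):
    if not collision_detected:
        return "continue"
    if isolated_link is None:
        return "zero_gravity"   # safest default: compliant torque mode
    return "reverse"            # back away from the localized contact

print(choose_reaction(False, None))  # continue
print(choose_reaction(True, 2))      # reverse
print(choose_reaction(True, None))   # zero_gravity
```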
Discussion and Conclusions
To weaken the impact of nonlinear disturbances on collision detection for sensorless manipulators, a method of disturbance recognition and collision detection based on an external torque observer was studied. Regarding friction as the main disturbance, this research analyzed it and established a mathematical model. Using the friction values observed by the external torque observer, effective parameter identification was achieved via a genetic algorithm. Combining theory and experiment, multiple sets of experiments were arranged to simulate collisions between the moving manipulator and a human, and the accuracy of collision detection by the external torque observer after compensating the friction disturbance was verified.
Most existing sensorless collision detection methods apply friction compensation based on dynamic formulas and identify the friction parameters through online or offline solutions, ignoring the remaining nonlinear disturbance factors such as assembly gaps and temperature. The dynamics and other parameters of the manipulator must then be re-identified when working in different environments. In contrast, the method proposed in this paper, which establishes a friction-based disturbance model by observing changes in the disturbance, overcomes this deficiency: the disturbance model can be adapted directly to different working environments. In addition, some of the remaining nonlinear disturbance factors are coupled into the identification process, and the identification itself is simple and fast.
Overall, this research provides a low-cost disturbance recognition and collision detection method that can be applied in different working environments and improves the safety of human-robot interaction with cooperative manipulators. The results demonstrate that the proposed scheme is feasible and has both engineering and theoretical significance. In further research, the disturbance model of the manipulator under variable working conditions will be analyzed and compared in detail.
Conflicts of Interest:
The authors declare no conflict of interest.
THE INFLUENCE OF INSTRUCTIONAL MATERIALS ON ACADEMIC PERFORMANCE OF SENIOR SECONDARY SCHOOL STUDENTS IN CHEMISTRY IN CROSS RIVER STATE
This research work investigated the influence of instructional materials (teaching aids) on students' academic performance in senior secondary school Chemistry in Cross River State. A two-group pre-test post-test quasi-experimental design was adopted for the study. One research question and one hypothesis were formulated to guide the study. A total of 100 senior secondary one (SS1) Chemistry students were selected from five (5) schools in Yakurr Local Government Area of Cross River State through simple random sampling and stratified random sampling techniques. Fifty SS1 students (Experimental group) were taught with instructional materials and another fifty (Control group) were taught without instructional materials. A validated Chemistry Achievement Test (CAT) was used to gather data for the study, and a split-half analysis was carried out using the Pearson product moment correlation to obtain a reliability coefficient of 0.67. The independent t-test was used to test the hypothesis at the 0.05 significance level, while the Pearson product moment correlation coefficient at that level was used to analyse the research question. The study revealed that students taught with instructional materials performed significantly better than those taught without instructional materials, and that the use of instructional materials generally improved students' understanding of concepts and led to high academic achievement. Recommendations were made on how to improve the academic performance of chemistry students by encouraging the use of instructional materials in teaching and learning chemistry.
INTRODUCTION
Instructional materials serve as a channel between the teacher and the students in delivering instruction. They may also serve as motivation in the teaching-learning process, used to capture the attention of students and eliminate boredom. Instructional materials are highly important for teaching, especially for inexperienced teachers. Teachers rely on instructional materials in every aspect of teaching; they need materials for background information on the subject they are teaching.
Young teachers usually have not yet built up their expertise when they enter the field. Teachers often use instructional materials for lesson planning. These materials are also needed to assess the knowledge of their students: teachers often assess students by assigning tasks, creating projects, and administering exams, and instructional materials are essential for all of these activities.
Chemistry as a science subject is activity oriented, and the suggested method for teaching it, the guided discovery method, is resource based (NTI, 2007). This suggests that mastery of chemistry concepts cannot be fully achieved without the use of instructional materials; teaching chemistry without them will certainly result in poor performance in the course. Franzer et al. (1992) stressed that a professionally qualified science teacher, no matter how well trained, would be unable to put his ideas into practice if the school setting lacks the equipment and materials necessary to translate his or her competence into reality. Bassey (2002) described instructional media as system components that may be used as part of the instructional process to disseminate informative messages and ideas or to make communication possible in the teaching-learning process. Experience over the years has shown that teachers have depended on the excessive use of words to express and convey ideas or facts in the teaching-learning process, a practice termed the chalk-talk method. Today, advances in technology have made it possible to produce materials and devices that can be used to minimize the teacher's talking and, at the same time, make the message clearer, more interesting, and easier for the learner to assimilate (Onasanya et al., 2008). According to Soetan et al. (2010), graphics include charts, posters, sketches, cartoons, graphs, and drawings. Graphics communicate facts and ideas clearly through a combination of drawings, words, and pictures. The use of graphics in teaching lends definiteness to the materials being studied; they help learners visualize the concepts learned and their relationships with one another.
Hands-on instructional materials show rather than tell, which increases information retention. A truism often heard in teaching is that if you have not learnt, I have not taught. A reasonable conclusion, then, is that the importance of instructional materials in teaching and learning science is most efficiently illustrated through student achievement results. The Biological Science Curriculum Study (BSCS) (2011) asserts that students come to the science classroom with many misconceptions that must be corrected for proper scientific learning to progress. Schools should base instructional materials on fundamental scientific concepts and principles, which help to align students' understanding with current knowledge and teach them to monitor and control their own thought processes to facilitate learning. When science is integrated with other inter-disciplinary courses, the teacher should give careful attention to designing a logical and coherent structure to ensure clear communication and contextual understanding of the embedded scientific concepts, as recommended by the Long Beach Unified School District (LBUSD) (2010). The LBUSD recommends the use of hands-on science activities, while the BSCS advocates group work and inquiry-based activities. Such explorations encourage students to engage in science, which promotes problem-solving thought patterns and corrects students' mistaken notions of science and the world. Teachers who take time to provide instructional materials and options that take into account the different ways students receive and express knowledge are more likely to see their students succeed. Science classrooms should provide a variety of audio, visual, and print input methods depending on students' needs, allowing students the flexibility to communicate their true learning. According to the BSCS, students and teachers who closely follow its 5Es instructional model (engage, explore, explain, elaborate, and evaluate) achieve a high rate of success. Taylor, Scotter and Coulson (2007) conclude that there is a statistical link between superior student achievement and basic or extensive use of strategies and learning sequences consistent with the 5Es. Research has shown that where instructional materials are used, the learning environment is highly stimulating and students appear to take greater interest in learning.
Statement of Problem
The transmission of facts, ideas, and information from the teacher to the students in a systematic order or procedure is referred to as teaching. During this process, instructional materials, otherwise known as teaching aids, meant to make instruction more meaningful, clear, and much more interesting to students, are brought into use. There is a general impression that science education is not achieving its desired objectives, especially given the high incidence of students' poor performance in chemistry and other science subjects at the senior secondary certificate examination. This situation has assumed a precarious dimension in all secondary schools in Cross River State, and particularly in Yakurr Local Government Area. The failure of the educational system to provide adequate and appropriate teaching-learning aids in order to improve the academic performance of students is of great concern to government, educational institutions, and other concerned citizens. It is believed that if adequate instructional materials are made available to schools and used appropriately in the teaching-learning process, better performance can be achieved. Hence the motivation for this study, which seeks to find out the influence of instructional materials on the academic performance of senior secondary school students in chemistry.
Purpose of the Study
The purpose of this study is to:
1. Find out the influence of instructional materials on the academic performance of senior secondary school students in chemistry.
2. Compare the performance of two sets of students, one taught with instructional materials and the other without (the Experimental and Control groups, respectively).
Research Questions
In the course of this research work, the following question was raised:
1. To what extent do students taught with instructional materials perform better than those taught without instructional materials?
Hypothesis
H0: There is no statistically significant relationship between the academic performance of chemistry students and the use of instructional materials in teaching-learning.
Significance of the Study
This study will help to:
1. Steer government and proprietors of schools to recognize the need to adequately equip their schools with current and appropriate instructional materials.
2. Prove the worth of instructional materials in teaching-learning processes.
3. Inculcate in teachers the habit of using instructional materials appropriately in the teaching-learning process to arouse interest and determination among students.
METHOD AND MATERIALS
The researcher adopted a quasi-experimental design for this study. The population consisted of all senior secondary one students in Yakurr Local Government Area of Cross River State. A total of one hundred (100) students were sampled from five secondary schools using a simple random sampling technique, and a stratified random sampling technique was used to select the five schools in order to obtain a truly representative sample. The ratio of boys to girls was 1:1, reflecting gender equality. The simple random sampling technique was used in selecting students to avoid prejudice and to allow for effective student-material interaction and adequate classroom management.
The researcher prepared two different lesson notes, which were used to teach the students. There were two groups: the experimental group was taught with instructional materials, while the control group was taught without them. The same topic, "Postulates that support the kinetic theory of matter", was used for both groups. At the end of the lesson, the researcher administered the Chemistry Achievement Test (CAT) to the students in the two groups. The CAT comprised ten (10) multiple choice items; each question had four options with one correct answer, and each correct answer was scored two marks. The researcher compared the performance on the pre-test and post-test in the two groups. The instrument was first validated by chemistry education experts, and the reliability of the CAT was determined using the Pearson product moment correlation for split halves, yielding a reliability coefficient of 0.67.
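A split-half reliability check like the one reported (r = 0.67) correlates each student's score on one half of the items with their score on the other half. The sketch below uses made-up per-student half scores for illustration, and shows the Spearman-Brown step as an optional correction the authors may or may not have applied:

```python
# Hedged sketch of split-half reliability via the Pearson product
# moment correlation; the score data are invented for illustration.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_brown(r_half):
    """Step up the half-test correlation to full-test reliability."""
    return 2 * r_half / (1 + r_half)

odd_scores  = [4, 6, 8, 3, 7, 5]   # hypothetical odd-item totals
even_scores = [5, 5, 9, 2, 6, 6]   # hypothetical even-item totals
r = pearson(odd_scores, even_scores)
print(round(r, 2), round(spearman_brown(r), 2))
```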
The purpose of this research was explained to the students. To test the performance of the experimental and control groups, a pre-test based on the topic to be taught was administered to the students. The pre-test was followed immediately by teaching the two groups the topic "Postulates that support the kinetic theory of matter", one group with instructional materials and the other without. The researcher made sure that the students in both groups submitted their scripts after both tests, giving a return rate of 100%. During the pre-test and post-test examinations, the same examination conditions were enforced in the two groups so as to obtain reliable and valid results. To assure confidentiality and avoid prejudice, the students were asked not to write their full names.
The scores of students in both the pre-test and post-test were transformed into grouped data, and the frequency of students' performance was computed. The independent t-test statistic was employed to compare the two groups (one taught with instructional materials and the other without). The Pearson product moment correlation (PPMC) and the independent t-test were used for data analysis. The hypothesis and research question were tested at the 0.05 alpha level of significance. The mean scores and standard deviations of the two groups were also computed.
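A pooled-variance independent t-test like the one described can be computed directly from the two groups' scores. The scores below are invented for illustration (the study's raw data are not given), so the resulting t-value is not the paper's reported 5.42:

```python
# Minimal pooled-variance independent t-test between two groups.
from statistics import mean, variance

def t_independent(a, b):
    na, nb = len(a), len(b)
    # Pooled sample variance with (na + nb - 2) degrees of freedom.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

experimental = [16, 18, 14, 20, 17, 15]   # hypothetical CAT scores
control      = [10, 12, 11, 9, 13, 10]
t = t_independent(experimental, control)
print(round(t, 2))  # 5.47; |t| above the 0.05 critical value rejects H0
```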
RESULTS: Hypothesis (H o )
There is no statistically significant relationship between the academic performance of chemistry students and the use of instructional materials in teaching-learning.
The analysis in Table 1 shows that the calculated t-value (5.42) is greater than the critical value (1.98) at the 0.05 alpha level of significance. Therefore, the null hypothesis is rejected. This implies that there is a statistically significant relationship between the academic performance of chemistry students and the use of instructional materials in teaching-learning.
Research Question One
To what extent do students taught with instructional materials perform better than those taught without instructional materials?
The analysis in Table 1 shows that the calculated r-value (0.61) is greater than the critical value (0.273) at the 0.05 alpha level of significance, and that the mean and standard deviation of students taught with instructional materials are greater than those of students taught without. The result shows that the performance of the experimental group was better than that of the control group.
DISCUSSION OF THE RESULT
The findings on the research hypothesis showed that there is a statistically significant relationship between the academic performance of chemistry students and the use of instructional materials in teaching-learning.
The result agrees with the findings of Inyang (1997) that teaching is effective when the teacher makes use of instructional materials. Lance et al. (1999) and Todd & Kuklthau (2004) confirmed a significant correlation between the presence and use of library materials by students and teachers and better performance. Similarly, Todd & Kuklthau (2005, p. 82) found a simple correlation between student inputs and better academic achievement. Analysis shows that the availability and use of the chalkboard, math kits, teaching guides, science guides, audio-visual aids and science kits have a positive impact on the academic performance of science students. The concept of instructional materials revolves around the fact that they not only stimulate the learner, but enhance learning outcomes generally, increase retention and recall by involving the relevant senses, and make instruction clear, meaningful and in most cases real. Also, Emma & Ajayi (2004) asserted that teaching equipment and materials have changed over the years and not only facilitate the teaching-learning situation but also address the instructional needs of individuals and groups. Okendu (2012) asserted that regular instructional supervision has a significant bearing on students' academic performance, and affirmed that an adequate supply of instructional resources has a significant effect on students' academic performance (see also Onasanya & Omosewo, 2011).

The results of research question one imply that the performance of the experimental group is better and higher than that of the control group. This is in agreement with the concept that, if learning is to be achieved positively, the laboratory should be seen as a workshop for a range of student activities, from experimental investigation to confirmatory exercises and skills learning. The results accord with Inyang's (1997) view that students learn faster through activity-oriented instruction, and that when students are not actively involved in the learning process, performance becomes poor. This is not far-fetched, given that instructional materials are very important in the teaching-learning process if learning outcomes are to be achieved with relative ease. Jimoh (2009) emphasized that advances in technology have brought instructional materials, especially projected and electronic materials, to the forefront as the more radical tools of globalization and social development, and these have affected the classroom teaching-learning situation positively. Such technological breakthroughs as networked and non-networked, projected and non-projected, visual, audio and audio-visual electronic materials are important landmarks in knowledge transfer and high academic performance. Also, Aguisiobo (1998) expressed that learning is an activity that takes place in a context and not in a vacuum, and reiterated that students taught with teaching aids do not have a blank mind but a consolidated and developed library of knowledge. Omosewo (2008) ascertained that in a modern science curriculum programme, students need to be encouraged to learn not only through their eyes or ears, but should be able to use their hands to manipulate apparatus.
CONCLUSION
This study aimed to examine the influence of instructional materials (teaching aids) on students' academic performance in Chemistry in senior secondary schools. It is hereby concluded that:
1. Students taught with instructional materials perform better and higher than those taught without instructional materials.
2. There is a statistically significant relationship between the academic performance of chemistry students and the use of instructional materials in teaching-learning.
RECOMMENDATIONS
Based on the results of the study, the following recommendations are made:
Table 1
Independent t-test and Pearson Product Moment Correlation analysis of students taught with instructional materials and those taught without instructional materials
Todd, R., and Kuklthau, C. (2004). Students Learning through Ohio School Libraries: Background, Methodology and Report of Findings. OELMA, Columbus, OH.
HIV Infection and Bone Abnormalities
More than 36 million people are living with human immunodeficiency virus (HIV) infection worldwide and 50% of them have access to antiretroviral therapy (ART). While recent advances in HIV therapy have reduced the viral load, restored CD4 T cell counts and decreased opportunistic infections, several bone-related abnormalities such as low bone mineral density (BMD), osteoporosis, osteopenia, osteomalacia and fractures have emerged in HIV-infected individuals. Of all classes of antiretroviral agents, HIV protease inhibitors used in ART combinations showed a higher frequency of osteopenia, osteoporosis and low BMD in HIV-infected patients. Although the mechanisms of HIV- and/or ART-associated bone abnormalities are not known, it is believed that the damage is caused by a complex interaction of T lymphocytes with osteoclasts and osteoblasts, likely influenced by both HIV and ART. In addition, infection of osteoclasts and bone marrow stromal cells by HIV, HIV Gp120-induced apoptosis of osteoblasts, and the release of proinflammatory cytokines have been implicated in the impairment of bone development and maturation. Several of the newer antiretroviral agents currently used in ART combinations, including the widely used tenofovir in different formulations, show relative adverse effects on BMD. In this context, switching the HIV regimen from tenofovir disoproxil fumarate (TDF) to tenofovir alafenamide (TAF) showed improvement in the BMD of HIV-infected patients. In addition, inclusion of an integrase inhibitor in the ART combination is associated with improved BMD in patients. Furthermore, supplementation of vitamin D and calcium with the initiation of ART may mitigate bone loss. Therefore, levels of vitamin D and calcium should be part of the evaluation of HIV-infected patients.
INTRODUCTION
There are about 36.7 million people living with human immunodeficiency virus (HIV) infection worldwide [1]. While 2.1 million people were newly infected with HIV-1 in 2015, a figure that has not changed among adults since 2010, 1.1 million people died of HIV/AIDS in 2015, a decline of 45% since 2005 worldwide [1]. Since the start of the epidemic until the end of 2015, 78 million people have been infected with HIV and 35 million people have died of HIV/AIDS worldwide [1]. More importantly, 18.2 million HIV-infected individuals had access to antiretroviral therapy (ART) by June 2016, almost 50% of the HIV-infected population worldwide [1]. In the United States, the Centers for Disease Control and Prevention (CDC) estimates that approximately 1.2 million people are living with HIV, males account for 76% of the HIV-infected population, and more than one-half million people have died with HIV/AIDS. New HIV infections in recent years in the United States have remained relatively stable at around 50,000 per year, whereas HIV-related death rates have declined significantly. Recent advances in antiretroviral therapy have reduced the viral load, increased CD4 T cell counts and slowed progression of HIV disease in many HIV-infected patients, and appear to be responsible for a dramatic improvement in these patients' lives. However, toxicity and the development of resistance remain concerns. In addition to its side effects, complications of ART include lipoatrophy, hypercholesterolemia, low HDL, hypertriglyceridemia, insulin resistance, impaired glucose tolerance, cardiovascular disease, lactic acidosis, and bone abnormalities.
Long term infection with HIV and use of ART in HIV-infected patients are associated with several bone related abnormalities such as low bone mineral density (BMD), osteomalacia, osteopenia, osteoporosis, osteonecrosis, fracture and other bone disorders [2 -5]. While bone disorders are multifactorial in nature, nutritional deficiencies such as vitamin D levels and other classical risk factors for bone disorders, including smoking and tobacco use, which are prevalent in HIV-infected individuals, may exacerbate bone related abnormalities in these patients who are on long term ART [6]. A better understanding of the etiology and pathogenesis of bone related disorders in HIV-infected people who are now living longer because of ART may provide useful information that could be included in the treatment strategies of these aging HIV-infected individuals. This article will review the bone conditions and abnormalities associated with HIV infection and use of ART in HIV-infected patients.
HIV INFECTION AND BONE MINERAL DENSITY
Several studies have reported low bone mineral density (BMD) with increased risk of osteoporosis, osteopenia and osteomalacia in HIV-infected individuals, including men, women, younger and older patients and vertically infected children [7-16]. A recent study evaluating 58 HIV-infected children and adolescents between the ages of 5.3 and 18.3 years, including 63.3% girls, found an increased risk for lower BMD and lower serum vitamin D concentrations [17], and the loss in BMD is associated with the levels of vitamin D binding protein [18]. These and other studies [19] suggest that vitamin D levels should be included in the overall evaluation of HIV-infected patients. In addition, HIV infection is associated with poor bone material properties independent of BMD [20], and low BMD is associated with an increased risk of fracture [21]. The prevalence of osteopenia and osteoporosis independent of ART was evaluated in HIV-infected male patients, and ART was found not to be a predictor of the risk of osteopenia and osteoporosis in these patients [22], suggesting that HIV infection itself is associated with these bone abnormalities. These studies show an association of HIV infection with bone abnormalities, including low BMD, osteopenia, osteomalacia and osteoporosis, in infected patients [23]. However, the mechanisms of HIV-induced bone abnormalities are not well understood. We will explore the literature to determine the potential mechanisms of HIV involvement in bone disorders in HIV-infected patients.
While the mechanisms of bone loss or abnormalities during HIV infection are not known, it is hypothesized that the damage may be due to a complex interaction of T cells with osteoclasts and osteoblasts, likely influenced by both HIV and ART [16]. One potential and important mechanism of HIV-induced bone disorders could be HIV infection of osteoclasts, which are derived from monocytes and are resident macrophages in bone tissue [24]. Since osteoclasts are required for maintenance, repair and remodeling of bones, infection of osteoclasts by HIV drives their differentiation [24] and most likely contributes to osteolytic disease in HIV-infected patients. Several additional studies have been performed to determine the role of HIV proteins in bone disorders. HIV Gp120, the envelope protein of the virus that binds to the CD4 receptor and the CCR5 or CXCR4 coreceptors, was shown to induce apoptosis of osteoblasts [25,26], including a significant upregulation of the proinflammatory cytokine TNF-α and of Wnt/β-catenin signaling [26], likely contributing to bone loss. In addition, HIV Gag p55, the precursor protein for the HIV matrix, capsid and nucleocapsid proteins, was also found to decrease the level of osteogenesis in mesenchymal stem cells, seemingly playing a role in reducing BMD [27]. These studies suggest that HIV and its gene products, such as envelope Gp120 and Gag p55, play important roles in interfering with bone development and maturation. Furthermore, HIV-infected patients receive ART, which may further complicate bone-related abnormalities. A recent study of HIV-infected postmenopausal women receiving ART that included the HIV protease inhibitor ritonavir showed higher bone turnover markers and increased differentiation of osteoclast-like cells from adherent peripheral blood mononuclear cells, with an increased risk of bone loss [28].
Another study showed that an HIV protease inhibitor increased the rate of apoptosis and impaired osteogenic markers in an osteoblast-like cell line [29]. These studies suggest that ART may play a role in bone-related disorders in HIV-infected individuals, which will be discussed later in this article.
One important investigation could be to determine whether HIV infection of bone marrow cells contributes to bone-related disorders. In this context, HIV infection of bone marrow stromal cells has been demonstrated in several studies [30-32]. In HIV-infected patients, bone marrow CD34+ progenitor cells were found to have impaired T cell differentiation due to the production of proinflammatory cytokines [33]. In addition, B cells in HIV-infected patients express higher levels of the receptor activator of NF-κB ligand (RANKL) and lower levels of osteoprotegerin (OPG), which influence osteoclastic bone resorption [34]. In a recent study evaluating the negative effect on bone acquisition caused by HIV infection early in life, T cell activation was found to be associated with a decreased number of osteogenic precursors and lower bone mass and strength [35]. Taken together, these studies suggest that HIV infection of bone marrow cells may play a role in impairing bone development and reducing bone mineral density, contributing to many abnormalities associated with the bones.
ANTIRETROVIRAL THERAPY AND BONE MINERAL DENSITY
HIV-infected patients are placed on ART as soon as their HIV status is known. However, long-term use of ART has been shown to be associated with many disorders, including lipoatrophy, hypercholesterolemia, low HDL, hypertriglyceridemia, insulin resistance, impaired glucose tolerance, cardiovascular disease, lactic acidosis and bone disorders. Several studies have also shown that the frequency of osteoporosis was higher in HIV-infected individuals receiving ART compared with uninfected individuals [36], including a decrease in BMD [37-43]. Since HIV-infected patients are on ART indefinitely, continuous administration of ART has been shown to be associated with decreased BMD and an increased risk of fractures compared with intermittent, CD4 T cell count-guided ART [44]. Earlier studies evaluated the effects of ART that included two nucleoside reverse transcriptase inhibitors (NRTIs) and one non-nucleoside reverse transcriptase inhibitor (NNRTI) or a protease inhibitor (PI) and found that ART influenced bone turnover [45] and caused a reduction in BMD [46-48]. In addition, patients receiving ART that included a protease inhibitor showed a higher incidence of osteoporosis than those without a protease inhibitor [49]. This finding was further supported by several studies showing that 50% of HIV-infected patients receiving a PI had osteopenia, 21% had osteoporosis [10] and 71% of patients had decreased BMD [49-51]. In HIV-infected women, HIV infection was associated with lower BMD independent of other known risk factors for decreased BMD [52]. In addition, protease inhibitor-containing ART, and particularly longer use of lopinavir, was associated with lower BMD, whereas use of efavirenz was associated with higher BMD [52].
There are more than thirty antiretroviral agents from different classes approved by the FDA and available for use in combination HIV regimens. Several studies have evaluated the relative effects of these antiretroviral agents on bone abnormalities in infected patients. In a study of HIV-infected patients in Korea, osteoporosis was associated with both abacavir- and zidovudine-based HIV regimens; however, zidovudine-associated osteoporosis was seen mainly after 1 year of treatment, whereas abacavir had adverse osteological effects in less than 1 year [52]. Recent studies addressing BMD in HIV-infected women from Sub-Saharan Africa found that those on ART had a 2-3% decrease in their BMD [53]. Contrary to this, some studies have found no reduction or difference in BMD in HIV-infected patients treated with ART [54,55].
A meta-analysis of 2,210 patients suggested that changing the HIV regimen to tenofovir disoproxil fumarate (TDF) is associated with a reduction in BMD [56], and switching from TDF to tenofovir alafenamide (TAF) led to improved BMD [57,58]. Two recent double-blind phase 3 trials of 1,733 ART-naïve HIV-infected individuals found that a TAF-based regimen had better virologic efficacy and less impact on BMD compared with a TDF-based regimen formulated with similar antiretroviral agents [59]. In a comparative analysis of HIV-infected, antiretroviral treatment-naïve African American patients, the ART combination of efavirenz, emtricitabine and TDF was associated with a reduction in BMD while maintaining vitamin D levels, compared with the combination of raltegravir and the protease inhibitors darunavir and ritonavir [60]. In addition, vitamin D levels are also associated with lower BMD in those who receive efavirenz or lopinavir/ritonavir [61]. In another comparative analysis of specific ART regimens, TDF/emtricitabine plus atazanavir/ritonavir, darunavir/ritonavir (DRV/r) or raltegravir were all associated with bone loss, but the loss was lowest with raltegravir [62]. Furthermore, supplementation with vitamin D and calcium at the initiation of efavirenz/emtricitabine/TDF ART may mitigate bone loss in HIV-infected patients [63]. In the AIDS Clinical Trials Group A5303 study, maraviroc (CCR5 inhibitor)-containing ART was associated with less bone loss at the hip and lumbar spine than TDF-containing ART, suggesting that maraviroc may be an option in ART to reduce bone loss [64]. The question now is how HIV-infected patients on ART should be followed with respect to loss of BMD. In general, dual-energy X-ray absorptiometry (DEXA) is used to assess BMD.
Some studies suggest that infected patients should be followed with serum bone-specific alkaline phosphatase and urinary N-terminal telopeptide [65], along with levels of osteoprotegerin [66], which may be predictive markers for loss of BMD and development of osteoporosis. Some studies recommend that, instead of DEXA, quantitative ultrasound (QUS) be used to assess BMD initially in HIV-infected patients to avoid unnecessary radiation exposure, because many patients may not benefit from DEXA [67,68].
CONCLUSION
Recent advances in HIV therapy have improved the quality and longevity of HIV-infected patients' lives through suppression of viral load, improvement of CD4 T cell counts and the near elimination of opportunistic infections in developed countries, especially the United States. Moreover, more than 50% of HIV-infected people worldwide now have access to ART. However, HIV-infected patients receiving ART experience many complications, including bone disorders. HIV-infected people experience low BMD and are at increased risk for osteopenia, osteoporosis, osteomalacia and fractures, and the use of antiretroviral agents further exacerbates these bone abnormalities. The currently recommended ART combination includes at least three drugs from two different classes of antiretroviral agents: nucleoside reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors, integrase inhibitors and protease inhibitors. Of all these classes, protease inhibitors showed a higher frequency of osteoporosis, osteopenia and low BMD. Several of the newer ART agents are also associated with low BMD and other bone abnormalities. However, switching the HIV regimen from tenofovir disoproxil fumarate (TDF) to tenofovir alafenamide (TAF) showed improvement in the BMD of HIV-infected people, and use of an integrase inhibitor in the ART combination improved BMD in patients. Supplementation of vitamin D and calcium with ART was found useful to reduce bone loss in HIV-infected patients.
CONSENT FOR PUBLICATION
Not applicable.
Influence of Temperature on Rising Bubble Dynamics in Water and n-pentanol Solutions
Data in the literature on the influence of water temperature on the terminal velocity of a single rising bubble are highly contradictory: different variations in bubble velocity with temperature are reported even for potentially pure systems. This paper presents a systematic study of the influence of temperature between 5 °C and 45 °C on the motion of a single bubble of practically constant size (equivalent radius 0.74 ± 0.01 mm) rising in clean water and in n-pentanol solutions of different concentrations. The bubble velocity was measured by a camera and an ultrasonic sensor, and reproduced in numerical simulations. Results obtained by image analysis (camera) were compared to the data measured by the ultrasonic sensor to reveal the similar scientific potential of the latter. It is shown that temperature has a significant effect on the velocity of the rising bubble. In pure liquid, this effect is caused only by modification of the physicochemical properties of the water phase, not by a change in the hydrodynamic boundary conditions at the bubble surface. In the case of solutions of surface-active substances, temperature additionally changes the kinetics of dynamic adsorption layer formation, which facilitates immobilization of the liquid/gas interface.
Introduction
The hydrodynamics of a single bubble are a crucial matter for such engineering and environmental applications as froth flotation, foam fractionation, waste treatment, oil recovery, pulp and paper, distillation, the aeration of water reservoirs and pipe flow (cavitation) [1][2][3][4]. Moreover, bubble motion is important for the design of bubble columns and reactors, where the motion is strictly correlated to mass transfer rates [5]. Furthermore, the description of bubble motion in solutions of surface-active substances (SAS) is used to determine the evolution and development of the dynamic adsorption layer (DAL) [6], the properties of which are essential for predicting real foam stability [7].
The current state of the subject in the literature consists of a vast number of reports showing the impact of bubble size and shape [8,9], surface tension [10], density and viscosity of both phases [11-13] and the type of surfactant [14-17] on single-bubble motion characteristics. Surprisingly, reports on the influence of temperature on the velocity of rising bubbles, even in pure liquids, are quite scarce, despite the fact that this effect has significance for engineering and industrial applications. Moreover, they show considerably contradictory data and trends. Leifer [18] showed that for clean bubbles in water at different temperatures an increase from 0 to 40 °C caused a decrease in the rising velocity, the magnitude of which was influenced by the bubble diameter. Okawa et al. [19] considered the temperature effect on single bubble rise characteristics in distilled water, but this work was focused mostly on a comparison between the influence of the temperature on bubble path oscillations and the method of bubble formation. Only two temperature values, low (15 °C) and high (90 °C), were studied, and in the majority of cases the terminal velocities differed significantly from the theoretical predictions assuming slip boundary conditions at the liquid/gas interface. Zhang et al. [20] determined the bubble rise velocity profiles in tap water and
Methods
Variations in the local velocities of a single bubble rising in an aqueous phase of different physicochemical properties (tuned by temperature modification, according to Table 1) were determined using digital camera observations coupled with image analysis, together with ultrasonic sensor data. The set-ups used for both experimental approaches are schematically illustrated in Figure 1; in both cases, the main parts were identical: a square glass column (40 mm × 40 mm × 400 mm) with a thick-walled glass capillary (inner diameter d_c = 0.0753 mm) sealed at the bottom, and an automatic bubble generator (Bubble-on-Demand [28]) to form a single bubble with control over its detachment frequency (adjusted to 60 s). Moreover, in both experimental approaches, the column with the tested liquid was placed and sealed inside a larger, outer square glass column (60 mm × 60 mm × 400 mm) to maintain the liquid temperature in the inner column at the desired level. Before each experimental series, the temperature was adjusted using a circulating water bath (Thermo Scientific SC100-A10, Waltham, MA, USA), and this process was controlled by an electronic thermometer immersed in the inner column liquid [30].
It has to be added here that, for experiments in pure water, only the period of rectilinear bubble motion was analyzed. It was observed during the experiments that, after a given distance, the bubble path deviated from a straight line; moreover, the distance at which this deviation was noticed was generally shorter for higher water temperatures. This distance, however, was much larger than needed for the bubble to reach terminal velocity, and an analysis of the temperature-dependent bubble path was beyond the scope of this paper. For n-pentanol, establishment of terminal velocity strictly depended on the kinetics of dynamic adsorption layer (DAL) formation. For this reason, longer distances covered by the bubble were analyzed, and for a particular pentanol solution concentration, the terminal velocity was calculated from the period where the bubble's oscillatory motion was observed.
Velocity Determination by Camera and Image Analysis
Details on the experimental protocol and image analysis algorithms used for determination of bubble velocities by visual observations can be found elsewhere [10,30,31]. Briefly, in this method, the local bubble velocity is calculated by analyzing bubble photos recorded by a CCD camera at equal time intervals. In our case, bubble motion was recorded by a SpeedCam MacroVis camera (Ettlingen, Germany) at 100 fps. The frame-by-frame analysis of the collected movies was automated by an in-house-written Python script. The local velocity was calculated as u_i = Δs_i/Δt, where Δs_i = sqrt((x_{i+1} − x_{i−1})² + (y_{i+1} − y_{i−1})²), while (x_{i+1} − x_{i−1}) and (y_{i+1} − y_{i−1}) are the vertical and horizontal coordinates of subsequent positions of the geometrical center of the rising bubble, and Δt is the time interval between the corresponding frames, matched to the camera frequency. For experiments in pure water, the significance of the vertical coordinates' constituents was negligible. Furthermore, from the pictures of the rising bubble, the so-called equivalent bubble diameter (d_eq) and the ratio of bubble deformation (χ) were calculated as d_eq = (d_h²·d_v)^(1/3) and χ = d_h/d_v, where d_v and d_h are the bubble's vertical and horizontal diameters, respectively. These parameters were used further to analyze the hydrodynamics of the rising bubble under different temperature conditions.
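A minimal sketch of this frame-by-frame procedure is given below. The function and variable names are illustrative (the authors' in-house script is not public), a central difference over two frame intervals is assumed for the velocity, and the equivalent diameter uses the standard volume-equivalent formula for an oblate spheroid.

```python
import math

FPS = 100             # camera frame rate used in the experiments
DT = 1.0 / FPS        # time interval between consecutive frames

def local_velocities(xs, ys, dt=DT):
    """Central-difference velocity of the bubble's geometrical center.

    xs, ys are coordinates (mm) of the center in consecutive frames;
    returns one velocity value (mm/s) per interior frame."""
    v = []
    for i in range(1, len(xs) - 1):
        ds = math.hypot(xs[i + 1] - xs[i - 1], ys[i + 1] - ys[i - 1])
        v.append(ds / (2 * dt))  # displacement spans two frame intervals
    return v

def equivalent_diameter(dh, dv):
    """Diameter of a sphere with the same volume as an oblate spheroid
    of horizontal diameter dh and vertical diameter dv (assumed form)."""
    return (dh * dh * dv) ** (1.0 / 3.0)

def deformation_ratio(dh, dv):
    """Ratio of horizontal to vertical bubble diameter (chi)."""
    return dh / dv
```

For a bubble rising rectilinearly, feeding the recorded center positions into local_velocities yields values that settle onto a plateau once terminal velocity is reached.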
Velocity Determination by Ultrasound
In this approach, an ultrasonic sensor mounted at the bottom of the liquid column transmitted and received at 5 MHz. The bubble rising velocity was determined by analyzing the variations in the temporal evolution of the position of the registered signal formed by ultrasonic waves reflected from the surface of the rising bubble. An example of the signal as a function of the distance of the bubble from the capillary is presented in Figure 2. The parameters of the sensor and the time-dependent signal position were controlled and recorded by the driver (OPBOX 2.0 mini ultrasonic box) and software elaborated by PBP OPTEL (Wrocław, Poland) [32]. The bubble position at the maximum signal value was acquired at constant time intervals of 87.8 ms. The values of the local bubble velocities were calculated by differentiating the temporal evolution of the signal position. For each of the selected temperatures (see Table 1), the velocity as a function of time was measured independently for 10 subsequent single bubbles. It is worth highlighting that, for an accurate determination of the signal's temporal evolution, information about the speed of sound in the liquid phase was necessary. Its values, presented in Table 2, were temperature-dependent and taken directly from engineering tables [29].
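The pulse-echo principle behind this measurement can be sketched as follows; the helper names are illustrative, and the sound speed used here is the approximate textbook value for water near 20 °C (the experiments took temperature-dependent values from engineering tables).

```python
def echo_to_distance(round_trip_s, speed_of_sound):
    """Pulse-echo ranging: the wave travels to the bubble and back,
    so the bubble's distance is half the round-trip time times c."""
    return 0.5 * round_trip_s * speed_of_sound

def velocities_from_positions(positions, dt=0.0878):
    """Forward-difference velocities (m/s) from bubble positions (m)
    sampled at the sensor's 87.8 ms acquisition interval."""
    return [(positions[i + 1] - positions[i]) / dt
            for i in range(len(positions) - 1)]

C_WATER_20C = 1482.0  # m/s, approximate speed of sound in water at ~20 C

# A 100-microsecond round trip corresponds to roughly 7.4 cm from the sensor
print(echo_to_distance(100e-6, C_WATER_20C))
```

Because the conversion from echo delay to distance is linear in c, an error in the assumed sound speed propagates directly into the velocity, which is why the temperature-matched tabulated values were essential.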
Numerical Calculations
Modelling of the rectilinear bubble motion in a liquid with the properties of water under different temperature conditions (according to Table 1) was performed using the spatial discretization and numerical scheme implemented in the Gerris Flow Solver (release 6 December 2013), which is described in detail elsewhere [33][34][35].
The numerical algorithms of Gerris were used to solve the governing equations describing the conservation of momentum and mass of an incompressible liquid in the form:

ρ(∂u/∂t + u·∇u) = −∇p + ∇·(2μQ) + σκδ_s n

∇·u = 0

where Q is the strain rate tensor; u is the fluid velocity vector; ρ is the fluid density and μ is its viscosity; p is pressure; t is time; σ is surface tension; δ_s is a Dirac distribution function (expressing the fact that the surface tension term is concentrated at the interface); κ and n are the curvature and normal unit vector to the interface, respectively [33]. The liquid column of height H = 150 mm and radius L, containing a gas bubble of radius R_b = 0.745 mm, was described by an axisymmetric cylindrical coordinate system. The chosen value of L was directly related to the numerical (adaptive) grid size, as discussed by Popinet [33] and Zawala [36], and was adjusted for convergence of the results. It was found that, to obtain converged data, L had to be at least 10 mm, which corresponded to a minimum numerical grid cell size of 4.9 µm. This was consistent with the results of similar calculations presented by Zawala [36]. Initially, at t = 0, the center of the motionless spherical bubble was set 3 mm above the bottom of the liquid column at the symmetry axis (x = 0). After acceleration, a constant speed (terminal velocity) of the bubble was established after t = 0.10 s. The bubble motion parameters were calculated for the time period t = 0.14-0.16 s. The liquid density, viscosity and surface tension were taken from Table 1 to mimic the bubble rise in the aqueous phase at different temperatures. A comparison between the experimentally obtained photos of the rising bubble under steady-state conditions and the corresponding numerically reproduced bubble outlines is presented in Figure 3. A very good qualitative agreement between these sets of data was found. The quantitative analysis of the data is presented further in the paper.
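Gerris refines its quad/octree grid by halving the cell size at each level, so the cell size is the domain size divided by 2 raised to the refinement level. Assuming this, the quoted 4.9 µm minimum cell corresponds to refinement level 11 of the 10 mm domain (the level itself is our inference, not stated in the text):

```python
# Quadtree grid: cell size = domain size / 2**level
L_mm = 10.0  # minimum domain radius found for converged results
for level in (9, 10, 11):
    cell_um = L_mm / 2**level * 1000.0  # cell size in micrometers
    print(f"level {level}: {cell_um:.2f} µm")
```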
Table 2. Speed of sound used to determine the rising bubble velocity using an ultrasonic sensor (taken from Eng. Toolbox [29]).
Bubble Rising in Pure Water
Values of the bubble radius (R_b = d_eq/2), calculated from the camera-registered rising bubble photos, are presented in Figure 4 (d_c = 0.0753 mm). In addition, the values reported by Zawala and Niecikowska [30], acquired for bubbles formed at capillaries of various d_c but a constant temperature T = 21 ± 1 °C, are given for comparison. The solid line represents the theoretical size of the bubble detaching from the capillary, which can be calculated by balancing the buoyant (detaching) force:

F_b = V_b·Δρ·g (7)

and the capillary (attachment) force:

F_c = π·d_c·σ·cos θ (8)

where V_b is the bubble volume; Δρ is the density difference between the liquid (ρ_l) and gas (ρ_g) phases; σ is the surface tension; θ is the contact angle (equal to 0 for a clean glass capillary surface); g is the gravitational acceleration. At the moment of bubble detachment, F_b equals F_c, and this relation can be rearranged to give an equation known as Tate's law [20,37]:

R_b = (3·σ·d_c/(4·Δρ·g))^{1/3} (9)

As seen in Figure 4A, a very good agreement between the experimental data and the theoretical predictions of Equation (9) for water at T = 21 ± 1 °C was obtained [30]. The R_b values measured in water of different T were also consistent with the predictions; nevertheless, slight deviations from the theoretical line could be observed, caused by variations in the water physicochemical parameters, especially the surface tension. Figure 4B presents the R_b as a function of the water surface tension (Table 1); a quite good match between the experimental and theoretical values was found. This proved that the bubble was generated (by the elaborated BoD generator [30]) under conditions that allowed the establishment of an equilibrium between F_b and F_c, so the bubble R_b could also be considered at equilibrium. A decrease in the σ value, caused by increasing the water temperature from 5 °C to 45 °C, resulted in only a slight variation in R_b (from 0.757 ± 0.005 mm to 0.734 ± 0.005 mm).
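Tate's law (Equation (9)) can be checked numerically. The surface tension and density values below are approximate engineering-table values for water, not taken from Table 1; the gas density is neglected against the liquid density:

```python
def tate_radius(d_c, sigma, delta_rho, g=9.81):
    """Equilibrium detachment radius from Tate's law:
    pi*d_c*sigma = (4/3)*pi*R_b**3 * delta_rho * g  (theta = 0)."""
    return (3.0 * sigma * d_c / (4.0 * delta_rho * g)) ** (1.0 / 3.0)

d_c = 0.0753e-3  # capillary diameter, m (from the text)
# (T in °C, surface tension in N/m, liquid density in kg/m^3) - approximate
for T, sigma, rho in [(5, 0.0749, 1000.0), (45, 0.0686, 990.2)]:
    R_b = tate_radius(d_c, sigma, rho)
    print(f"T = {T:2d} °C -> R_b = {R_b * 1e3:.3f} mm")
```

With these inputs the predicted radii fall close to the measured 0.757 mm (5 °C) and 0.734 mm (45 °C), illustrating how weakly R_b depends on σ through the cube root.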
A comparison of the terminal bubble velocities (u_t) is presented in Figure 5, where the terminal velocity is shown as a function of the temperature. For T = 20 °C, the value from Zawala and Niecikowska's [30] paper was used. The dashed lines in Figure 5 are second-order polynomials fitted separately to each of the datasets; the solid line is the average polynomial fit. It was evident that the bubble velocity measured by the ultrasonic sensor (u_s) was higher than that from the image analysis (u_c). However, the fitted dashed lines indicated that the relative difference between the camera and ultrasonic datasets was similar across temperatures, so it was caused by a systematic rather than a random factor. It can be presumed that this difference was probably caused by the assumptions made on the sound wave speed in the water phase at the different temperature values, which were taken directly from the engineering tables (see Table 2). The difference could also have been caused, for example, by wave interference with the column walls. As seen in the inset in Figure 5, the difference between the data obtained by the two techniques, quantified by the u_s/u_c ratio, was of the order of 2-5%. The average second-order polynomial fit, which accurately described the trend of terminal velocity variations (in cm/s) with temperature (expressed in °C) within the studied range, is given as Equation (10) (solid line in Figure 5). For the CFD data, the agreement with the experimental results decreased with increasing temperature. This effect was a consequence of the increasing bubble deformation (see Figure 3), i.e., the increase in the bubble d_h caused an increase in the drag force resulting from the column wall proximity (which could be associated with the so-called wall effect). As seen, both for the ultrasonic and camera methods, the standard deviation values for the average terminal velocity were quite small, indicating good reproducibility.
It should be highlighted, however, that, for the camera method, the terminal velocity was calculated from only one experimental run. The ultrasonic sensor, because of its simplicity and swiftness of measurement, allowed for multiple measurements of a bubble velocity profile, which increased the statistical soundness of the terminal velocity values.

Figure 5. Terminal bubble velocity as a function of the water temperature (see Table 1 for details), determined using ultrasonic and camera techniques.
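The separate second-order fits and their average can be sketched with NumPy. The velocity readings below are invented placeholders standing in for the Figure 5 data; only the fitting procedure itself follows the text:

```python
import numpy as np

# Hypothetical terminal velocities (cm/s) vs temperature (°C)
T = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
u_cam = np.array([28.0, 31.5, 34.5, 37.0, 39.0])  # camera dataset
u_us = np.array([29.0, 32.5, 35.5, 38.0, 40.0])   # ultrasonic dataset

# Fit each dataset separately (dashed lines), then average the
# coefficients to obtain the single average fit (solid line).
c_cam = np.polyfit(T, u_cam, 2)
c_us = np.polyfit(T, u_us, 2)
c_avg = 0.5 * (c_cam + c_us)

print(f"u_t(25 °C) ≈ {np.polyval(c_avg, 25.0):.1f} cm/s")
print(f"u_s/u_c ratio: {np.round(u_us / u_cam, 3)}")
```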
Usually, to characterize the bubble dynamics in liquids, various dimensionless numbers are used, allowing correlation and comparison between variations in the bubble motion parameters and shape pulsations under different physicochemical conditions. This helps to determine useful general expressions and dependencies, which could be extended to other systems with comparable bubble shape changes and flow regimes. In our case, for the description of the bubble dynamics, the deformation ratio χ (determined on the basis of the image analysis) and the rising velocities measured by the two different techniques under the different physical conditions (see Table 1) were described using the Reynolds (Re) and Weber (We) numbers, which allowed a direct comparison with the relations in the models in the literature. In addition, this comparison was used to assess the reliability of the ultrasonic method for determining the bubble dynamics in the aqueous phase. The Re and We were calculated as:

Re = d_eq·ρ_l·u_t/μ (11)

We = d_eq·ρ_l·u_t²/σ (12)

Figure 6A presents the experimentally determined χ values as a function of the Weber number, calculated for the experimental data by Zawala and Niecikowska [30] and for the data obtained in our studies under various temperatures. Moreover, the data from the numerical calculations are given for comparison. In addition, the empirical relation by Legendre et al. [38], in the form:

χ = 1/(1 − (9/64)·We) (13)

was plotted in Figure 6A as a solid line. Quite good agreement between the data and the relation given by Equation (13) was found. Again, the most significant difference was registered for the ultrasonic method; this was a consequence of the above-mentioned difference in the u_t values. Nevertheless, it can be assumed that, in the considered R_b range, the variations in the bubble χ vs. We were reasonably described by the Legendre relation [38].
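The dimensionless numbers and the Legendre deformation relation can be sketched as follows; the input values approximate water at room temperature and a bubble of d_eq ≈ 1.49 mm, and are illustrative rather than taken from the measurements:

```python
def dimensionless_numbers(d_eq, u_t, rho_l, mu, sigma):
    """Reynolds and Weber numbers of a rising bubble (Eqs. (11)-(12))."""
    Re = d_eq * rho_l * u_t / mu
    We = d_eq * rho_l * u_t**2 / sigma
    return Re, We

def legendre_aspect_ratio(We):
    """Deformation ratio chi(We) after Legendre et al. [38] (Eq. (13))."""
    return 1.0 / (1.0 - 9.0 * We / 64.0)

# Approximate water properties at ~25 °C (illustrative)
Re, We = dimensionless_numbers(d_eq=1.49e-3, u_t=0.35,
                               rho_l=997.0, mu=8.9e-4, sigma=0.072)
print(f"Re = {Re:.0f}, We = {We:.2f}, chi = {legendre_aspect_ratio(We):.2f}")
```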
The dependence of Re on We is given in Figure 6B. Here, it was possible to compare the data with the literature results by Pawliszak et al. [23] (experiments at room temperature) and the theoretical predictions reported by Manica et al. [13,39], which allowed the calculation of the terminal velocities of rising bubbles of different shapes, assuming a slip hydrodynamic boundary condition at the liquid/gas interface (i.e., when there was no adsorption layer at the bubble surface). As seen, the agreement between the different sets of literature data, i.e., the bubble velocities determined at room temperature (21 ± 1 °C), was almost perfect. This was, however, not the case for the u_t determined at various T, where a completely different trend was revealed. Intuitively, it could be expected that this new trend was caused not by a modification of the bubble hydrodynamic boundary conditions, but by the liquid physicochemical parameters only. To show the correctness of this claim, the results presented in Figure 6B were analyzed according to the model by Moore, allowing a direct calculation of the bubble drag coefficient (C_D). For this purpose, a common relation between We and Re (necessary for the further calculations) was quantified.
For the experiments at room temperature (literature data), the relation between Re and We was almost linear and was approximated (in the considered R_b range) by:

Re = 185.90·We + 66.88 (14)

while, for the various temperature conditions, it was approximated by Equation (15) (see the solid green line in Figure 6B). To calculate the theoretical drag coefficient associated with the rise of the deformed bubbles in water (clean liquid/gas interface) at various temperatures, the relation elaborated by Moore [40], which is confined to a thin viscous sublayer according to his theory of viscous flow around the bubble, was used:

C_D = (48/Re)·G(χ)·[1 + H(χ)/Re^{1/2}] (16)

where G(χ) and H(χ) are geometrical factors calculated by Moore [40], which were accurately approximated by the equations given by Loth [41] and Rastello et al. [42] (Equations (17) and (18)). To calculate the C_D values as a function of Re, the empirical relations between χ and We (Equation (13)), as well as Re and We (Equations (14) and (15)), were used. The drag coefficient of the experimentally observed bubbles was calculated from the general expression for the drag force (F_d) acting on an object moving in a liquid phase:

F_d = (1/2)·C_D·ρ_l·u_t²·A (19)

where A is the object's projected area (for a spherical bubble equal to πR_b²). Under steady-state conditions, when the rising velocity was constant (terminal), F_d = F_b. After rearrangement, assuming that, for the rising bubble, Δρ ≈ ρ_l, the C_D was calculated using the experimentally determined R_b and u_t values as:

C_D = 8·R_b·g/(3·u_t²) (20)

Figure 7 presents the determined C_D as a function of Re, calculated using Equations (11)-(20). In addition, the values of the drag coefficient of a particle with no-slip hydrodynamic boundary conditions [43] (Equation (21)) were also plotted. As could be expected, the Moore model very accurately described the literature data obtained at room temperature in pure water.
It was seen, moreover, that, after considering the temperature effect by means of Equations (14) and (15), the experimental data (determined both by the ultrasonic and camera techniques) were also very well described. This provided evidence that, under various temperatures of pure water, the hydrodynamic boundary conditions of bubbles of various sizes remained unchanged and could be assumed to be fully slip.
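A minimal sketch of Equations (16) and (20). The spherical-bubble limits G(1) = 1 and H(1) = −2.211 are used in place of the full Loth/Rastello approximations (Equations (17) and (18)), and the input values are illustrative:

```python
import math

def cd_moore(Re, G=1.0, H=-2.211):
    """Moore's drag coefficient (Eq. (16)):
    C_D = (48/Re) * G(chi) * (1 + H(chi)/sqrt(Re)).
    Defaults correspond to a spherical bubble (chi = 1)."""
    return 48.0 / Re * G * (1.0 + H / math.sqrt(Re))

def cd_experimental(R_b, u_t, g=9.81):
    """Drag coefficient from the steady-state force balance F_d = F_b
    (Eq. (20)), with delta_rho ~ rho_l: C_D = 8*R_b*g/(3*u_t**2)."""
    return 8.0 * R_b * g / (3.0 * u_t**2)

# Illustrative comparison at Re ~ 500
print(f"C_D(Moore, Re=500, spherical) = {cd_moore(500.0):.4f}")
print(f"C_D(exp, R_b=0.745 mm, u_t=0.35 m/s) = {cd_experimental(7.45e-4, 0.35):.4f}")
```

For the deformed bubbles of the paper, G(χ) > 1 and H(χ) differ from the spherical values, which is why the full approximations are needed to reproduce Figure 7.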
Moreover, the above analysis showed that the ultrasonic method of bubble velocity determination was reliable, but not as accurate as the visual observations, because it depended on an arbitrarily chosen speed of sound value, which had to be used during the velocity analysis. Moreover, it did not allow for the determination of the bubble deformation ratio. Nevertheless, the ultrasonic method was significantly faster and gave a much better level of statistical confidence in a remarkably reduced time. In our opinion, it can be successfully used as a reliable tool for single bubble velocity measurements, especially in opaque or turbid solutions where camera observations are difficult or impossible.

In addition, the experiments on the bubble motion in water of different temperatures allowed for the determination of useful relations between the dimensionless numbers and the T values. These relations, which are presented in Figure 8, could be expressed as:

We(T) = −3.18·10⁻⁴·T² + 0.043·T + 1.775 (22)

together with the corresponding relation Re(T) (Equation (23)). All the empirically determined relations between the various parameters during the period of rectilinear bubble rising under steady-state conditions in water of different temperatures are shown in Table 3. We believe that these relations could also be used for different bubble shapes and sizes under rectilinear motion.

Table 3. Empirical relations between various parameters useful for the description of the bubble dynamics in water of different temperatures (for 200 < Re < 1000).
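The empirical fit of Equation (22) can be evaluated directly (T in °C, within the studied 5-45 °C range):

```python
def weber_of_temperature(T):
    """Empirical fit We(T) from Eq. (22); T in °C, valid for ~5-45 °C."""
    return -3.18e-4 * T**2 + 0.043 * T + 1.775

for T in (5, 25, 45):
    print(f"We({T} °C) = {weber_of_temperature(T):.2f}")
```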
Analysis of the Local Velocity Profiles in Different Temperatures
Profiles of the local bubble velocity (i.e., velocity variations as a function of the distance covered by the bubble in various concentrations of n-pentanol solutions) are presented in Figure 9. The data redrawn from Zawala et al. [44] were compared with corresponding profiles taken by the ultrasonic sensor. The literature data were obtained using the classical camera technique and manual frame-by-frame image analysis [44].
Figure 9. Comparison of the bubble local velocity profiles obtained on the basis of the camera and image analysis approach (data redrawn from [44]) and using the ultrasonic technique.
Despite the slightly different temperatures of the solutions (our measurements were performed at 25 °C, while the literature results were reported at 21 °C), quite good agreement between the two sets of data was seen. All characteristic bubble velocity changes, including the maximum deceleration and the moment of the terminal velocity establishment, were accurately captured. It is well established that these characteristic velocity variations can serve as fingerprints of the dynamic behavior of the adsorption/desorption processes at the solution/air interface [31]; in other words, they can be used to track the development and stages of the formation of the so-called dynamic adsorption layer (DAL). For example, the maximum bubble velocity was an indication that the DAL had not yet formed but was just starting to form [45]. The terminal velocity establishment meant that the DAL was fully formed; that is, there was an uneven distribution of surfactant molecules, with a depletion zone at the bubble top pole [6,44,45]. As seen in Figure 9, the ultrasonic method can be used as a complementary tool for these purposes. As already mentioned, its main advantage was speed: there was no need for a time-consuming image analysis step. On the other hand, the ultrasonic measurements did not provide any information about the bubble size and deformation or the evolution of the bubble shape with time or distance. As was shown by Krzan et al. [45], this is an additional important parameter that can be used to analyze the DAL formation at moving liquid/gas interfaces.
To elucidate the influence of the temperature on the kinetics of the DAL formation, each bubble velocity profile, taken in the n-pentanol solution of the considered temperature (Table 1), was normalized according to the maximum velocity value (u_max). The u_max values for the chosen n-pentanol concentrations are presented in Table 4. As seen, the bubble maximum velocity increased with the temperature; this result was consistent with the reports by Zhang et al. [20], who observed a similar trend in a Triton X-100 solution of concentration 1.25 × 10⁻⁴ mol/m³. The normalized profiles are presented in Figure 10. As seen for 1 × 10⁻⁴ M and 1.5 × 10⁻³ M, the effect of increasing the solution temperature was similar to that of increasing the solution concentration (compare with the data in Figure 9). It was especially pronounced for 1.5 × 10⁻³ M, where the terminal velocity decreased as the temperature increased and, in addition, the moment of its establishment shifted slightly towards shorter distances (i.e., the DAL was established a little faster). The explanation of this effect was rather obvious: a higher temperature meant a higher bubble velocity and a simultaneous increase in the rate of the convective diffusion transport of the n-pentanol molecules to the rising bubble surface. Similar trends were shown in a solution of Triton X-100 by Zhang et al. [20].

Analysis of Terminal Velocity (at a Distance of 200 mm)

The effect of temperature on the terminal velocities was further analyzed according to the empirical equation developed by Kowalczuk et al.
[17]: where u_w is the bubble velocity in water (the maximum possible); u_min is the minimum velocity of the bubble (with a fully immobilized interface); c is the surface-active substance bulk concentration; and CMV is the so-called concentration at minimum velocity. As discussed elsewhere [14][15][16], the CMV can be used as a very useful tool for characterizing the kinetics of surfactant adsorption at the rising bubble interface (the kinetics of bubble surface immobilization), the solution foaming properties and a comparison of these factors for different types of surface-active substances. Figure 11A presents the u_t values for the bubble velocity at 200 mm. In the great majority of the experiments, this distance was enough to establish the terminal velocity at all n-pentanol concentrations, except for 1 × 10⁻³ M (see Figure 9). For this specific case, especially at the lower temperatures, the calculated u_t values were slightly higher than those corresponding to the fully developed DAL. The points presented in Figure 11 are experimental data, while the lines are predictions of Equation (24), which described the u_t vs. c dependence very accurately for all temperature ranges. As expected, the CMV values, calculated as a fitting parameter of Equation (24) and presented in Table 5, were practically identical for all temperature values. That meant that, despite the difference in the absolute bubble velocity values, the concentration that caused the complete immobilization of the rising bubble surface (above which no further velocity decrease was noticed) was temperature-independent.
By plotting the normalized bubble velocity (u_t − u_min)/(u_w − u_min) vs. the c/CMV values, all the experimental data taken for the different temperatures were seen to converge onto one universal curve, which indicated that the n-pentanol influenced the bubble rising velocity in a similar manner at each temperature. This was the final evidence that the temperature, in this case, influenced only the kinetics of adsorption of the n-pentanol at the liquid/gas interface.
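The collapse onto a universal curve can be illustrated as follows. The terminal velocities and the CMV value below are invented placeholders (not the measured data of Table 5); only the normalization itself follows the text:

```python
import numpy as np

# Hypothetical terminal velocities (cm/s) vs n-pentanol concentration (M)
c = np.array([1e-5, 1e-4, 3e-4, 1e-3, 3e-3])
u_5C = np.array([27.5, 25.0, 20.0, 16.5, 16.0])    # at 5 °C
u_45C = np.array([38.5, 34.0, 26.0, 20.5, 20.0])   # at 45 °C

CMV = 2e-3  # concentration at minimum velocity, illustrative

def normalize(u, u_w, u_min):
    """Map a u_t(c) curve onto the universal (u_t - u_min)/(u_w - u_min) form."""
    return (u - u_min) / (u_w - u_min)

print("c/CMV:", np.round(c / CMV, 3))
for label, u, u_w, u_min in [("5 °C", u_5C, 28.0, 16.0),
                             ("45 °C", u_45C, 39.0, 20.0)]:
    print(label, np.round(normalize(u, u_w, u_min), 2))
```

When the temperature changes only the absolute velocity scale (u_w, u_min) and not the adsorption equilibrium, the two normalized curves nearly coincide, which is the collapse described above.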
Conclusions
Experiments performed using two independent experimental methods, supported by numerical calculations and an analysis of the results, showed that, for a clean system, the temperature did not change the hydrodynamic boundary conditions at the rising bubble surface. Under various temperatures of pure water, the hydrodynamic boundary conditions of the bubbles of a given size remained unchanged and could be assumed to be fully slip. An increase in the rising velocity was caused only by modifying the physicochemical parameters of the water (density, viscosity and surface tension). Concerning the bubble's diameter, an increase in the temperature from 5 to 45 °C caused only a slight size modification. In turn, the bubble deformation varied significantly: the deformation ratio increased with the water temperature and its value was accurately quantified using Legendre's equation.
It was shown, moreover, that the concentration values at minimum bubble velocity (CMV), calculated from experiments of a bubble rising in n-pentanol solutions of different concentrations, were practically identical for all temperatures. It meant that, despite the difference in the absolute bubble velocity, the concentration, causing the complete immobilization of the rising bubble surface (above which no further velocity decrease could be noticed) was temperature-independent. The temperature only influenced the timescale of the bubble surface immobilization. This observation confirmed the results presented by Zhang et al. [20], which associated this effect with an increase in diffusion kinetics of the surfactant molecules.
The results and analysis showed that the ultrasonic method of determining the rising velocity of a single bubble was reliable, yet not as accurate as a visual observation because the ultrasonic sensor depended on an arbitrarily chosen speed of sound in a liquid phase, which had to be used during calculations. Moreover, it did not allow for the determination of the bubble deformation ratio, which (according to the literature) is an important parameter for helping to quantify the dynamic adsorption layer formation stages. On the other hand, the ultrasonic method was significantly faster and gave a much better level of statistical confidence in a remarkably reduced time. In our opinion, it can be successfully used as a reliable tool for single bubble velocity measurements, especially in opaque or turbid solutions, where camera observations are difficult or impossible.

Acknowledgments:
Partial financial support from (NCN grant no) is acknowledged with gratitude.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
|
v3-fos-license
|
2020-10-07T14:17:04.465Z
|
2020-10-07T00:00:00.000
|
222145344
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/s13063-020-04654-y",
"pdf_hash": "7eb0d43d2c8c50eae0a29224f944845ecb0de043",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:745",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7eb0d43d2c8c50eae0a29224f944845ecb0de043",
"year": 2020
}
|
pes2o/s2orc
|
Targeted hypothermia versus targeted normothermia after out-of-hospital cardiac arrest: a statistical analysis plan
Background To date, targeted temperature management (TTM) is the only neuroprotective intervention after resuscitation from cardiac arrest that is recommended by guidelines. The evidence on the effects of TTM is unclear. Methods/design The Targeted Hypothermia Versus Targeted Normothermia After Out-of-hospital Cardiac Arrest (TTM2) trial is an international, multicentre, parallel group, investigator-initiated, randomised, superiority trial in which TTM with a target temperature of 33 °C after cardiac arrest will be compared with a strategy to maintain normothermia and active treatment of fever (≥ 37.8 °C). Prognosticators, outcome assessors, the steering group, the trial coordinating team, and trial statisticians will be blinded to treatment allocation. The primary outcome will be all-cause mortality at 180 days after randomisation. We estimate a 55% mortality in the targeted normothermia group. To detect an absolute risk reduction of 7.5% with an alpha of 0.05 and 90% power, 1900 participants will be enrolled. The secondary neurological outcome will be poor functional outcome (modified Rankin scale 4–6) at 180 days after cardiac arrest. In this paper, a detailed statistical analysis plan is presented, including a comprehensive description of the statistical analyses, handling of missing data, and assessments of underlying statistical assumptions. Final analyses will be conducted independently by two qualified statisticians following the present plan. Discussion This SAP, which was prepared before completion of enrolment, should increase the validity of the TTM2 trial by mitigating analysis bias.
Background
The Targeted Hypothermia Versus Targeted Normothermia After Out-of-hospital Cardiac Arrest (TTM2 trial) is a continuation of the collaboration that resulted in the Target Temperature Management after out-of-hospital cardiac arrest trial (TTM trial) [1].
The TTM trial (NCT01020916) [1] was a multicentre, multinational, outcome assessor-blinded, parallel group, randomised clinical trial comparing two target temperature regimens of 33°C and 36°C in unconscious patients who had sustained return of spontaneous circulation after out-of-hospital cardiac arrest [1]. The trial did not demonstrate any significant difference in mortality rates or intact neurological survival between the two groups. Recently, the Therapeutic Hypothermia after Cardiac Arrest in Nonshockable Rhythm (HYPERION) trial was published [2]. This trial showed that among patients with coma who had been resuscitated from cardiac arrest with nonshockable rhythm, moderate therapeutic hypothermia at 33°C for 24 h compared with targeted normothermia led to a higher percentage of patients who survived with a favourable neurologic outcome at day 90 (P = 0.04) [2].
The TTM2 trial is an international, multicentre, parallel group, investigator-initiated, randomised, superiority trial in which TTM with a target temperature of 33°C after out-of-hospital cardiac arrest of a presumed cardiac or unknown cause will be compared with early treatment of fever (≥ 37.8°C).
This publication will describe the statistical analyses of the primary and secondary outcomes in the TTM2 trial.
Methods
The design of the TTM2 trial has been described in detail previously [3]. In short, the trial population will be adults (18 years of age or older) who experience a nontraumatic out-of-hospital cardiac arrest of a presumed cardiac or unknown cause with return of spontaneous circulation (ROSC). Patients will be eligible for enrolment if they meet all of the inclusion criteria and none of the exclusion criteria.

Co-enrolment with the TAME trial

At certain sites, all TTM2 participants will also be enrolled in the TAME trial. We consider co-enrolment in TTM2 and TAME an effective use of research resources. Adequate randomisation and a sample size as large as ours should lead to similar proportions of participants treated with targeted therapeutic mild hypercapnia in each of the TTM2 intervention groups. If there are no interactions between the TTM2 trial interventions and the TAME trial interventions, any beneficial or harmful effects of the TAME trial interventions will balance out. An interaction between the TTM2 trial interventions and the TAME trial interventions is not likely. Theoretically, the TTM2 trial interventions are believed to have neuroprotective effects, including reductions in metabolic rate and pathologic cell signalling, while the TAME trial interventions are believed to affect cerebral blood flow. Furthermore, we have studied the interaction between PaCO2 and temperature in the TTM trial and there was no statistically significant interaction (P interaction = 0.95) [4]. If we show significant interactions, this will be handled as described under the 'Assessments of underlying statistical assumptions' section.
Randomisation and blinding
Randomisation will be performed by an investigator in the emergency department, in the angiography unit, or in the intensive care unit via a web-based application using permuted blocks with varying block sizes, stratified by site and co-enrolment in the TAME trial (no co-enrolment, TAME intervention arm 1, TAME intervention arm 2). Due to the nature of the intervention, the treating providers will not be blinded to the intervention. However, the outcome assessors, the prognosticators, the statisticians, the data managers, and the authors of the first version of the manuscript will be blinded to treatment allocation.
Trial interventions
The intervention period for both intervention groups will be 40 h and commence at the time of randomisation. Rapid cooling in the hypothermia group will be achieved by means of cold fluids and state-of-the-art cooling devices, i.e. intravascular/body-surface/nasal/oesophageal cooling (physical cooling). A feedback-controlled system will be used to maintain the target temperature. In the normothermia arm, the aim will be early treatment of fever (≥ 37.8°C) using pharmacological measures and physical cooling when needed (up to 72 h). For participants who develop a temperature of 37.8°C (trigger), a device will be used and set at 37.5°C. All participants will be sedated, mechanically ventilated, and haemodynamically supported throughout the intervention period. Participants who are managed at 33°C will begin rewarming 28 h after randomisation.
Participants who remain unconscious will be assessed according to a conservative protocol based on the European Resuscitation Council (ERC)'s recommendations for neurological prognostication after cardiac arrest [3].
The main results of the trial will be published following the 6-month follow-up; results from the long-term follow-up and the outcome assessment of neurocognitive function will be presented separately [5].
Outcomes
The outcomes were defined as primary and secondary [3]. The sample size was based on the primary outcome and our primary conclusions will be based on the results of the primary outcome. We ranked the outcomes in our outcome hierarchy according to clinical relevance and estimated the power of each outcome to ensure that we had sufficient power to confirm or reject the anticipated intervention effects [6].
Primary outcome
All-cause mortality (dichotomous outcome)
Secondary outcomes
- Proportion of participants with a poor functional outcome (modified Rankin scale 4-6) (dichotomous outcome) [7]; in a secondary analysis, we will analyse the ordinal modified Rankin scale data (ordinal data)
- Number of days alive after hospital discharge within 6 months after randomisation (count data)
- Health-related quality of life using EQ-5D-5L (VAS) [8] (continuous outcome)
- Time-to-death (survival data) for each participant from randomisation until 6 months after the last participant is randomised; if death has not occurred, participants will be censored at this point

Dichotomous and continuous outcomes will be assessed at 30 days, 6 months, and 24 months after randomisation. For primary and secondary analyses, only the 6 months time point will be used.
Sample size and power estimations
Based on the results of the previous TTM trial [1] and information in the International Cardiac Arrest Registry (INTCAR), we anticipate a mortality of 55% in the normothermia group [9]. Using an absolute risk reduction of 7.5% as anticipated intervention effect, an acceptable risk of type I error of 5%, and an acceptable risk of type II error of 10%, a total of 1862 (931 participants in each group) participants are required. This anticipated intervention effect corresponds to a relative risk reduction (RRR) of 13.6% and a number needed to treat (NNT) of 14 [10,11]. Only 4/939 patients withdrew consent in the TTM trial, and there were no missing data on mortality [1]. To allow for a possible loss to follow-up, we will recruit a total of 1900 participants.
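The sample-size arithmetic above can be reproduced with the standard normal-approximation formula for comparing two proportions. The sketch below uses only the Python standard library; the small gap to the protocol's 931 per group reflects formula and rounding choices, not an error in either calculation.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p_control, abs_risk_reduction, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for comparing two proportions."""
    p1 = p_control                       # anticipated mortality, normothermia group
    p2 = p_control - abs_risk_reduction  # anticipated mortality, hypothermia group
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided type I error
    z_beta = z.inv_cdf(power)            # power = 1 - type II error
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / abs_risk_reduction ** 2)

print(n_per_group(0.55, 0.075))  # → 929; the SAP's 931 per group uses a slightly different formula
```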
We also estimated the statistical power of all secondary outcomes [6]. With an estimated sample size of 931 participants per group, the functional outcome measure (dichotomised mRS) has a power of 90% to detect a relative risk of 0.86 for a poor outcome (mRS 4-6) in 55% of cases in the control group. For the secondary outcome time-to-death, we estimate a power of > 90% based on the survival estimates mentioned above. We estimate a power of approximately 90% to detect a difference of 5 points on the EQ-5D-5L VAS scale, based on a mean value of 70 in the control group and a standard deviation of 25 points [1,3]. For the secondary outcome 'days alive outside hospital', we estimate a power of approximately 83%, based on simulations [3].
General analysis principles
All analyses will be conducted according to the intention-to-treat principle (ITT), i.e. all randomised participants will be included in the analysis. A per protocol analysis will be performed if the number of participants in whom temperature management is withheld due to palliative care, early death or other reasons during the first six hours after randomisation exceeds 5% of the total trial population.
We will assess whether the thresholds for both statistical significance and clinical significance are crossed (Bayes factor calculations will be reported in supplementary material) [12]. Assessment of clinical significance will be based on the anticipated intervention effects used in the sample size/power estimations [12]. Our primary conclusion will be based on the primary outcome, so all tests of statistical significance (including subgroup analyses) will be two-sided with a type I error risk of 5% [12].
It is generally acknowledged that regression analyses ought to be adjusted for the stratification variables used in the randomisation [13][14][15]. The TTM2 trial uses two stratification variables in the randomisation, i.e. 'site' and 'co-enrolment in the TAME trial' (no co-enrolment, TAME intervention arm 1, TAME intervention arm 2). We will primarily adjust all regression analyses for 'site' and 'co-enrolment in the TAME trial' to balance prognostic baseline characteristics across TTM2 trial intervention groups. We will also assess whether there are significant interactions between TTM2 trial interventions and the stratification variables (see the 'Assessments of underlying statistical assumptions' sections).
We will also perform the following subgroup analyses: sex (male compared to female), first presenting cardiac rhythm (shockable compared to non-shockable), presence of shock on admission (no shock on admission compared to shock on admission), age (at or above the median compared to below the median), and duration of cardiac arrest (at or above the median compared to below the median). We will present the results in forest plots.
Analysis of dichotomous data
Dichotomised outcomes will be presented as proportions of participants in each group with the event, as well as risk ratios with 95% confidence intervals. Dichotomous outcomes will be analysed using mixed effects generalised linear models using a log link function with 'site' as a random intercept using an exchangeable covariance matrix, and co-enrolment will be included as a fixed effect.
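The adjusted analysis itself requires a mixed-model package, but as an illustration of the reporting format, an unadjusted risk ratio with a Wald 95% confidence interval can be computed from the 2×2 counts alone. The counts below are hypothetical, not trial data.

```python
from math import exp, log, sqrt

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted risk ratio with a Wald 95% CI computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) from the delta method
    se = sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 442/931 deaths (hypothermia) vs 512/931 (normothermia)
rr, lo, hi = risk_ratio_ci(442, 931, 512, 931)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # → RR 0.86 (95% CI 0.79-0.94)
```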
Analysis of continuous data
Continuous outcomes will be presented as means and standard deviations for each group along with 95% confidence interval for the means of the groups and the mean differences between the groups. Continuous outcomes will be analysed using mixed effects linear regression with 'site' as a random intercept using an exchangeable covariance matrix, and co-enrolment will be included as a fixed effect. We expect that a large proportion of the participants will die before assessment of quality of life. When assessing health-related quality of life, we will therefore in the primary analysis impute a '0' for all participants who died or who are incapacitated and did not participate in the quality of life assessment.
In a secondary analysis of quality of life, we will only include survivors at 6 months.
Analysis of count data
Count data will be presented as means, mean differences, and 95% confidence intervals or medians, interquartile ranges, and 95% confidence intervals (bootstrapping) depending on the observed distribution. Count data will be analysed by the van Elteren test stratified by 'site' [16].
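SciPy has no built-in van Elteren test, but the statistic is simply a weighted combination of per-stratum Wilcoxon rank sums. A minimal pure-Python sketch follows, assuming no tied values (no tie correction is applied) and using the locally best weights 1/(N_s + 1); the site data are hypothetical.

```python
from math import sqrt

def van_elteren(strata):
    """Van Elteren stratified rank test with weights 1/(N_s + 1).

    `strata` is a list of (group_a_values, group_b_values) pairs, one per site.
    Returns a z-statistic; assumes no ties across the pooled values.
    """
    num, var = 0.0, 0.0
    for a, b in strata:
        n_a, n_s = len(a), len(a) + len(b)
        pooled = sorted(a + b)
        w_s = sum(pooled.index(x) + 1 for x in a)  # rank sum of group a
        e_w = n_a * (n_s + 1) / 2                  # expected rank sum under H0
        v_w = n_a * len(b) * (n_s + 1) / 12        # variance (no tie correction)
        weight = 1 / (n_s + 1)
        num += weight * (w_s - e_w)
        var += weight ** 2 * v_w
    return num / sqrt(var)

# Hypothetical 'days alive outside hospital' at two sites:
z = van_elteren([([12, 30, 45], [0, 3, 8]), ([20, 25], [1, 6])])
print(z)  # positive: group a tends toward more days alive
```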
Analysis of survival data
Survival data will be presented as median survival time, frequencies, and percentages per group as well as hazard ratios with 95% CIs. Survival data will be analysed using Cox regression adjusted for site and co-enrolment. We plan to present Kaplan-Meier curves.
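A Kaplan-Meier curve is the product-limit estimator over the observed event times. A minimal sketch with toy follow-up times (hypothetical, not trial data); ties are handled only via the stable sort, so events must be listed before censorings at the same time.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates; events: 1 = death, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])  # stable sort by time
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:                 # a death at this time: step the curve down
            surv *= 1 - 1 / at_risk
            curve.append((times[i], surv))
        at_risk -= 1                  # deaths and censorings both leave the risk set
    return curve

# Hypothetical follow-up times in days (event 0 = censored alive at that time):
print(kaplan_meier([30, 90, 150, 180, 180], [1, 1, 0, 1, 0]))
```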
Handling of missing data
All randomised participants will be included in the primary analysis of all outcomes except in the primary analysis of health-related quality of life (please see the 'Analysis of continuous data' section). We anticipate that the proportion of missing values on primary and secondary outcomes will be less than 5%. However, we will in a secondary analysis consider using multiple imputation and present best-worst and worst-best case scenarios if it is not valid to ignore missing data [17]. Best-worst and worst-best case scenarios assess the potential range of impact of the missing data on the trial results [17]. In the 'best-worst' case scenario, it is assumed that all patients lost to follow-up in the hypothermia group have had a beneficial outcome (have survived, had no poor functional outcome, and so forth), and all those with missing outcomes in the control group have had a harmful outcome (have not survived, have had poor functional outcome, and so forth) [17]. Conversely, in the 'worst-best' case scenario, it is assumed that all patients who were lost to follow-up in the experimental group have had a harmful outcome and that all those lost to follow-up in the control group have had a beneficial outcome [17]. When continuous outcomes are used, a 'beneficial outcome' will be defined as the group mean plus two SDs of the group mean (fixed imputation), and a 'harmful outcome' will be defined as the group mean minus two SDs of the group mean (fixed imputation) [17].
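The best-worst and worst-best scenarios amount to a fixed, deterministic imputation rule. A small sketch for a dichotomous outcome follows; the group labels and outcome coding (1 = death, None = missing) are illustrative, not the trial's data format.

```python
def scenario_impute(outcomes, groups, scenario="best-worst"):
    """Fixed imputation of missing dichotomous outcomes (1 = death, None = missing).

    'best-worst': missing in 'hypothermia' -> survived (0), missing in control -> died (1).
    'worst-best': the reverse.
    """
    good_for_hypo = scenario == "best-worst"
    imputed = []
    for y, g in zip(outcomes, groups):
        if y is None:
            hypo = g == "hypothermia"
            # Impute survival (0) when the scenario favours this participant's group
            y = 0 if (hypo == good_for_hypo) else 1
        imputed.append(y)
    return imputed

outcomes = [1, None, 0, None]
groups = ["hypothermia", "hypothermia", "normothermia", "normothermia"]
print(scenario_impute(outcomes, groups, "best-worst"))  # → [1, 0, 0, 1]
print(scenario_impute(outcomes, groups, "worst-best"))  # → [1, 1, 0, 0]
```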
Assessments of underlying statistical assumptions
We will systematically assess underlying statistical assumptions for all statistical analyses [18,19]. For all regression analyses, both primary and secondary, we will test for major interactions between each covariate and the intervention variable. When assessing for major interactions, we will, in turn, include each possible first order interaction between included covariates and the intervention variable. For each combination, we will test if the interaction term is significant and assess the effect size. We will only consider that there is evidence of an interaction if the interaction is statistically significant after Bonferroni-adjusted thresholds (0.05 divided by the number of possible interactions; here, the treatment variable's interactions with 'site' and with 'co-enrolment in the TAME trial', giving 0.05/2 = 0.025) and if the interaction shows a clinically important effect. If it is concluded that the interaction is significant, we will consider both presenting an analysis separately for each (e.g. for each site if there is significant interaction between the trial intervention and 'site') and an overall analysis including the interaction term in the model [18,19].
Assessments of underlying statistical assumptions for dichotomous outcomes
We will assess if the deviance divided by the degrees of freedom is significantly larger than 1 to assess for relevant overdispersion. Overdispersion is the presence of greater variability (statistical dispersion) in a data set than would be expected based on a given statistical model; in this case, we will consider using a maximum likelihood estimate of the dispersion parameter. To avoid analytical problems with either zero events or problems with all participants dying at a given site, we have only included sites planning to randomise a sufficient number of participants. However, we cannot exclude the risk that some sites might have problems with recruitment. We will check whether the number of participants per site is larger than 10 (rule of thumb) and consider pooling the data from small sites if the number of participants is too low [19].
Assessments of underlying statistical assumptions for linear regression
We will visually inspect quantile-quantile plots of the residuals [20,21] to assess if the residuals are normally distributed and use residuals plotted against covariates and fitted values [20,21] to assess for homogeneity of variances. If the plots show deviations from the model assumptions, we will consider transforming the outcome, e.g. using a log or square-root transformation, and/or using robust standard errors [19][20][21].
Assessments of underlying statistical assumptions for Cox regression
We will visually inspect log-log plots stratified by treatment and adjusted for the effects of all covariates (continuous and categorical) [20,22] to assess if the assumption of proportional hazards between the compared intervention groups is fulfilled. If the assumption of proportional hazards seems violated, we will consider using a non-parametric test (e.g. log rank test) or split the observation period into two (or more) separate observation periods [19].
Statistical reports
Blinded data on all outcomes will be analysed by two independent statisticians [19]. Two independent statistical reports will be sent to the chief principal investigator and will be shared with the steering group and author group, and if there are discrepancies between the two primary statistical reports, then possible reasons for that will be identified and the steering group will decide which is the most correct result. A final statistical report will be prepared, and all three statistical reports will be published as supplementary material [19].
Mock tables are presented in Mock Tables TTM2.
Discussion
The primary aim of this present publication is to minimise the risks of outcome reporting bias and erroneous data-driven results. We therefore present a pre-defined description of the statistical analysis plan for the TTM2 trial.
Strengths
Our methodology has several strengths as it is predefined and we have limited problems with multiplicity because we only assess one primary outcome and our conclusions will primarily be based on the results of the primary outcome [12]. Our chosen outcomes are all patient-centred. Our primary outcome, all-cause mortality, remains perhaps the most reliable and patient-centred outcome and we assess all-cause mortality as a dichotomous outcome at one time point, which simplifies both the statistical methodology and the clinical interpretability, i.e. it is intuitively easy to assess whether a shown difference (effect size) is clinically important when comparing two proportions at one time point. We will analyse data in accordance with the intention-to-treat principle and, if necessary, use multiple imputation and best-worst/worst-best case scenarios to assess the potential impact of the missing data on the results [17]. Furthermore, we plan to systematically assess whether underlying statistical assumptions are fulfilled for all statistical analyses.
Limitations
A potential limitation of the TTM2 trial is the potential for heterogeneous intervention effects depending on the mode of cooling at different clinical sites, and the potential for biased trial results if a large proportion of the randomised participants withdraw consent after regaining capacity. Another potential limitation is the planned co-enrolment with the TAME Trial; our results will be difficult to interpret if there are significant interactions between the TTM2 and TAME trial interventions. As mentioned (see the 'Co-enrolment with the TAME trial' section), we have studied the interaction between PaCO2 and temperature in the TTM trial and found no statistically significant interaction (P interaction = 0.95) [4], and if we show significant interactions, this will be handled (see the 'Assessments of underlying statistical assumptions' section). Co-enrolment with the TAME trial also made it possible to increase the planned sample size from 1200 to 1900 participants. We only assess one primary outcome and our primary conclusions will be based on the result of the primary outcome, but we assess several secondary outcomes, exploratory outcomes, and subgroup analyses, which increase the risks of type I errors. It is a limitation that we do not adjust our thresholds for significance according to the number of outcome comparisons. Furthermore, our anticipated intervention effects used in the sample size estimation and the power estimations for the secondary outcomes are not based on previous valid studies because we have not identified such studies. We have pragmatically chosen these anticipated intervention effects based on clinical judgement and previous trial results [1,2]. This increased risk of type I errors and the uncertainty regarding the anticipated intervention effects need to be considered when interpreting our trial results.
Conclusion
We present a pre-defined description of the statistical analysis for the TTM2 trial. The risks of outcome reporting bias and erroneous data-driven results will be minimised if this statistical analysis plan is followed.
|
v3-fos-license
|
2019-09-16T03:22:36.823Z
|
2019-06-18T00:00:00.000
|
208413762
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.researchprotocols.org/2020/1/e14588/PDF",
"pdf_hash": "a8fbe42d3c02fc98ad7b689137b69f5c71e5baf6",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:750",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b552ef4adbf21f6dfa466c5524f1cd2eefbe79fb",
"year": 2020
}
|
pes2o/s2orc
|
Using Mobile Devices to Deliver Lifestyle Interventions Targeting At-Risk High School Students: Protocol for a Participatory Design Study
Background: Unhealthy lifestyle behaviors such as insufficient physical activity, unhealthy diet, smoking, and harmful use of alcohol tend to cluster (ie, individuals may be at risk from more than one lifestyle behavior); such behaviors can be established in early childhood and adolescence and track into adulthood. Previous research has underlined the potential of lifestyle interventions delivered via mobile phones. However, there is a need for deepened knowledge on how to design mobile health (mHealth) interventions taking end user views into consideration in order to optimize the overall usability of such interventions. Adolescents are early adopters of technology and frequent users of mobile phones, yet research on interventions that use mobile devices to deliver multiple lifestyle behavior changes targeting at-risk high school students is lacking. Objective: This protocol describes a participatory design study with the aim of developing an mHealth lifestyle behavior intervention to promote healthy lifestyles among high school students. Methods: Through an iterative process using participatory design, user requirements are investigated in terms of technical features and content. The procedures around the design and development of the intervention, including heuristic evaluations, focus group interviews, and usability tests, are described. Results: Recruitment started in May 2019. Data collection, analysis, and scientific reporting from heuristic evaluations and usability tests are expected to be completed in November 2019. Focus group interviews were being undertaken with high school students from October through December, and full results are expected to be published in Spring 2020. A planned clinical trial will commence in Summer 2020. The study was funded by a grant from the Swedish Research Council for Health, Working Life, and Welfare.
Conclusions: The study is expected to add knowledge on how to design an mHealth intervention taking end users’ views into consideration in order to develop a novel, evidence-based, low-cost, and scalable intervention that high school students want to use in order to achieve a healthier lifestyle. International Registered Report Identifier (IRRID): DERR1-10.2196/14588 (JMIR Res Protoc 2020;9(1):e14588) doi: 10.2196/14588
Health Habits Among Youths and the Need for Scalable Interventions
The lifestyles of young people affect not only their current health but also their risk of a number of noncommunicable diseases (NCDs) such as cardiovascular diseases, cancers, chronic respiratory diseases, and diabetes. Insufficient physical activity, unhealthy diet, smoking, and harmful use of alcohol are all modifiable behaviors that increase the risk of NCDs [1][2][3]. Swedish national surveys have revealed that the majority of young people in Sweden do not consume the recommended daily amount of fruits and vegetables nor do they meet the recommended physical activity guidelines [4][5][6]. Also, smoking remains a global public health issue, with a high prevalence among youth [7] that also applies to Sweden [6]. Alcohol consumption has declined, but heavy episodic drinking continues to be a problem among alcohol-drinking adolescents [4,8]. Clearly, effective and evidence-based interventions to promote healthier lifestyles in adolescents are warranted.
Adolescence is characterized by rapid physical and psychological changes, together with increasing demands and influences of peers, school, and wider society. It is well documented that behaviors developed during this period influence health in adulthood. Adolescence is the peak period for initiation of substance use, which creates large health burdens in this age group [9,10]. As unhealthy lifestyle behaviors tend to be established in early childhood and adolescence and track into adulthood [11][12][13], efforts to reach high school students are vital. The prevention of diseases related to modifiable behavior has been emphasized as a key component of adolescent health [14]. Part of the effort to reduce NCDs involves helping individuals to change their lifestyles to promote health [1,3,10]. According to the World Health Organization (WHO), the education sector can play an important role in health promotion for youths [15]. School health systems, with qualified professionals such as school nurses, welfare officers, and health educators, provide services for students that promote optimum health for their academic success. School multidisciplinary teams provide good accessibility for adolescents and a natural setting for attempting to endorse healthy lifestyle behaviors for as many adolescents as possible [16]. However, to be delivered by school health professionals, interventions require minimal resources and time.
Mobile Health Interventions to Promote a Healthier Lifestyle Among Youths
Over the past decade, interest has increased in providing lifestyle interventions via mobile phones, often referred to as mobile health (mHealth) interventions. mHealth is defined by WHO as a medical or public health practice that is supported by mobile devices [15]. Major advantages with mHealth interventions are that they require fewer resources than traditional face-to-face interventions and they can be delivered at any time. To date, most mHealth interventions have focused on improving one or two lifestyle behaviors such as nutrition and/or physical activity or smoking cessation. However, there is also evidence that lifestyle behaviors may cluster (ie, individuals may be at risk from more than one lifestyle behavior) [17][18][19]. Interventions targeting multiple lifestyle behaviors at the same time may be beneficial for improving general lifestyle among adults [19,20] and may be more effective and efficient than those targeting a single behavior [21].
To date, although young people are early adopters of technology and frequent users of mobile phones, studies on interventions that use mobile devices to deliver multiple lifestyle interventions to high school students are few, and most interventions target only one or two single behaviors [22]. In this context, it is also relevant to note that a meta-analysis [23] examined the effectiveness of text message-based interventions for tobacco and alcohol cessation within a young adult population. Only 5 of the 14 studies reported significant differences between groups of substance use behavior outcomes. The authors concluded that the included randomized controlled trials (RCTs) lacked detail regarding intervention content. Consequently, replication of the RCTs and the possibility of identifying why and how previous interventions in youth were effective are difficult.
Formative Research Processes
A neglected area of research is the documentation and critical analysis of the formative research processes required in the development and refinement of effective mHealth interventions [24]. A systematic review stressed the need for further research to evaluate the efficacy and effectiveness of intervention approaches in promoting preventive behavior among adolescents [25]. A more recent systematic review emphasized the urgent need to examine development processes for mHealth interventions. The authors concluded that it is important to fully understand how interventions have been developed to allow replication and adaptation of interventions across settings [26].
The prompt expansion of device capability presents many challenges for developers of mHealth interventions, especially when designing interventions that aim to affect multiple individual lifestyle behaviors [27]. As described in Bock et al [27], one set of challenges concerns the structure, content, and tone of the intervention. Previous research has pointed out that the most important factors during the design process are to be flexible and responsive to the input and feedback of the target audience: if they do not enjoy the program they may disengage [28]. A systematic review called for greater transparency in use of theory in developing mHealth interventions [29]. An additional challenge is that of technological cultural consistency (ie, to ensure the developed interventions and modes of access are compatible with the ways in which the intended target group uses technology) [27]. Given the identified challenges and needs regarding development of mHealth interventions, research is needed on how best to design mHealth interventions taking end user assessments into account [28].
Aim
The aim of this research protocol is to describe the research process in developing a novel mHealth intervention to change risky lifestyle behaviors among high school students (LIFE4YOUth).
LIFE4YOUth is one of seven mHealth interventions in a research program (funded by Forte 2018-01410; principal investigator: ML) aiming to promote healthy eating, physical activity, smoking cessation, and nonrisky drinking in seven different populations in the health care system [30]. All included studies will follow a harmonized procedure for intervention development, and hence a secondary aim of this protocol is to describe the formative work for LIFE4YOUth as a framework for the included interventions in the research program.
Study Overview
The development of the LIFE4YOUth intervention is based on a review of the literature and will be inspired by the same phases of development and evaluation as any intervention provided by the National Institutes of Health [31] and further developed and described as recommended by Abroms et al [28] in their guide based on collective experiences in designing, developing, and evaluating mHealth interventions ( Figure 1). The recommended steps for developing mHealth interventions include (1) conduct research for insight into target audience and target health behavior, (2) design the intervention, (3) pretest the intervention, and (4) revise the intervention [28]. This paper will focus on step 2 (designing the intervention) and step 3 (pretesting the intervention). The intervention will be designed and pretested during an iterative process, involving multiple rounds of feedback.
Preparations: Development of a Preliminary Version of LIFE4YOUth
The structure of a preliminary version of LIFE4YOUth was developed in early 2019. The intervention aims to target physical activity, diet, alcohol consumption, and smoking by giving high school students access to a mobile phone app to promote a healthy lifestyle. The structure and content are based on current best practices gathered from scientific literature on lifestyle interventions and behavior change and are inspired by fundamental theoretical constructs such as behavior change theories and psychological models [32,33]. The technical platform is based on our previous research in developing mHealth interventions [34][35][36].
Participatory Design Processes
The participatory design process used in this study will include three activities: (1) heuristic evaluation, (2) focus group interviews, and (3) usability tests. We will invite end users including high school students and university students and employees at Linköping University. The knowledge, experiences, ideas, and skills of the participants will be used to revise the intervention. Figure 2 presents the activities included in the design of the intervention.
Heuristic Evaluation
Effectiveness, as part of usability defined by the International Organization for Standardization standard 9241-11:2018, will be investigated using heuristic evaluation [37], a usability inspection method. During heuristic evaluation, trained evaluators review an intervention to find usability problems, assign them to a specific category of heuristics, and ascribe a severity rating in order to provide distinct usability information. The experts recruited for this evaluation are not necessarily usability experts but should have some level of expertise with the subject matter or technology required to use the investigated app. The heuristics will identify usability issues such as problems with unclear functions, confusing navigation, and consistency issues [38][39][40].
Focus Group Interviews
Initially, a series of focus groups will be conducted to enable the collection and analysis of three complementary forms of data: individual data, group-level data, and data generated from participant interactions [41,42]. Focus groups are semistructured discussions with research participants that aim to explore a specific set of issues [43]. The focus groups will be designed to elicit feedback on content, understandability, and acceptability of the proposed intervention so that modifications can be made.
Usability Tests
Usability tests [44][45][46] will be completed in order to further modify and improve the intervention. Usability tests consist of a human-computer interaction and refer to evaluating an intervention by testing it with potential end users with the goal of identifying understandability, learnability, and attractiveness and determining participant satisfaction with the intervention. The usability tests will provide information on whether participants are able to complete specified tasks successfully, identify how long it takes to complete tasks, and identify changes required to improve user performance and satisfaction [44,47].
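The task-level metrics described above (completion success and time on task) can be summarized per task across participants. The sketch below is purely illustrative; the task names and observations are hypothetical and not drawn from the study.

```python
from statistics import mean

# Hypothetical usability-test observations: (task, completed?, seconds taken).
observations = [
    ("open alcohol module", True, 42.0),
    ("open alcohol module", True, 55.0),
    ("set a physical-activity goal", False, 120.0),
    ("set a physical-activity goal", True, 88.0),
]

def summarize(task, data):
    """Completion rate and mean time on task across participants."""
    rows = [(ok, t) for name, ok, t in data if name == task]
    rate = sum(ok for ok, _ in rows) / len(rows)
    return rate, mean(t for _, t in rows)

rate, avg = summarize("set a physical-activity goal", observations)
print(rate, avg)  # 0.5 104.0
```

A low completion rate combined with a long mean time would flag a task for redesign in the next iteration.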
Heuristic Evaluation
Participants for the heuristic evaluation will be recruited by members of the research team through paper advertising (posters) in public areas at Linköping University. Participants will register their interest by contacting the research leader by email.
Focus Groups and Usability Tests
School staff at five high schools selected for convenience in Östergötland (Sweden) will be contacted via email and informed about the research project. Approximately 1000 students, both female and male, aged 15 to 18 years, attend these high schools and will all be invited to take part in the focus group interviews and usability tests. High school students at the selected schools are expected to be similar to the overall target population for mHealth interventions.
Participants among high school students will be recruited by the school staff through paper advertising (posters and leaflets), digital advertising (student email and school website), and information in the classrooms. High school students will register their interest by contacting the research leader by email or telephone or by contacting school staff who will send students the telephone number of the research leader.
Selection Criteria
Inclusion criteria for the heuristic evaluation will include university students and employees at the Faculty of Medicine and Health Sciences at Linköping University who are willing to participate and who own a mobile phone. Inclusion criteria for the focus group interviews and usability tests will include high school students aged 15 to 18 years at selected high schools in Östergötland who are willing to participate and who own a mobile phone. A total of 32 to 44 participants are expected. Exclusion criteria for the focus group interviews and usability tests will be high school students who are not Swedish-speaking or do not own a mobile phone.
Heuristic Evaluations
A total of 15 heuristic evaluators, both students and employees at Linköping University, will be recruited. A research assistant will give participants a short (eg, 45 minutes) training session to instruct them on the main principles of heuristic evaluation during a meeting with all participants. The introduction will take place in a conference room at Linköping University. For the heuristic evaluation, a set of 10 standardized heuristics published by Nielsen [37] will be used. The heuristics for usability evaluation according to Nielsen are listed in Textbox 1 [37]. Participants will be taught how to use the heuristics to evaluate the intervention. All participants will be sent a link with a prototype of the intervention. Each evaluator will go through the prototype one time and independently identify issues tied to a specific heuristic (eg, visibility of system status, plain language, flexibility and efficiency of use, aesthetic design) and give them a severity rating [38][39][40]. The evaluation will be performed wherever the participants prefer and sent back to the research assistant in a prepaid envelope within 1 week of receipt. Heuristic evaluations will be gathered in May 2019.

Textbox 1. Heuristics for usability evaluation according to Nielsen.

• Visibility of system status: system should always keep users informed about what is going on through appropriate feedback within reasonable time.
• Match between system and the real world: system should speak the users' language with words, phrases, and concepts familiar to the user rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
• User control and freedom: users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialog. Support undo and redo.
• Consistency and standards: users should not have to wonder whether different words, situations, or actions mean the same thing.
• Error prevention: even better than good error messaging is a careful design that prevents problems from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
• Recognition rather than recall: minimize users' memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialog to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
• Flexibility and efficiency of use: accelerators, unseen by the novice user, may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
• Aesthetic and minimalist design: dialogs should not contain irrelevant or rarely needed information. Every extra unit of information in a dialog competes with relevant units of information and diminishes their relative visibility.
• Help users recognize, diagnose, and recover from errors: error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
• Help and documentation: even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, be focused on the user's task, list concrete steps to be completed, and not be too large.
Focus Group Interviews
A total of 4 focus group interviews with 3 to 6 participants in each group (12 to 24 participants) will be conducted between May and June 2019. Teachers will not be present during the interviews. The consolidated criteria for reporting qualitative research (COREQ) 32-item checklist [48] will be applied to give an explicit and comprehensive structure to the focus group interviews according to the following domains:

• Research team and reflexivity: a female researcher with a PhD degree and training and experience in qualitative methodology (UM) will be responsible for conducting the focus groups. A female observer (AS) will ask complementary questions at the end of the interview. The interviewer will explain the purpose of the interview and her interests in doing the research.
• Study design: an explorative qualitative approach [41,42] will be used for methodological orientation. All focus group interviews will be conducted in a high school setting. Each semistructured interview will be audiorecorded and will last approximately 1.5 hours. An interview guide (Multimedia Appendix 1) will be used [41]. Interview questions will be framed around the following domains: (1) making lifestyle changes, (2) use of the mobile phone for health informatics, (3) intervention content, (4) overall feedback, and (5) visual prototype. After discussing the first four domains, UM will present a low-fidelity paper-based prototype [49] including a series of printouts of the LIFE4YOUth program.
• Analysis and findings: analyses will be performed using systematic thematic analyses (further described under Data Analysis). Data will be coded by two researchers. Themes will not be identified in advance but will derive from the data. Quotations will be used to illustrate the themes and elucidate the findings.
Usability Tests
A total of 5 usability tests [45][46][47] will be completed. Five high school students will go through a 60-minute session during which all interactions with the intervention are videorecorded. A high-fidelity prototype [49], including the actual software start page, menu page, and 4 intervention modules (alcohol, smoking, physical activity, and diet) will be used. During this high-fidelity prototype testing, participants will go through the entire intervention module. A research assistant will ask participants to complete tasks while explaining their actions using a think aloud method [50,51]. An observer (AS) will note potential issues as the given tasks are performed by the participants. The assistant will not offer any help during the task execution to minimize any disruptions of spontaneous thoughts as well as to avoid bias in the results. After completing the session, the participants will be asked to complete a paper version of the system usability scale (SUS). The SUS is a standardized tool to get a global view of the participants' subjective assessments of usability based on 10 questions [52].
The tests will be run on an iPhone and take place in June 2019 in a medical informatics lab room.
Heuristic Evaluation
All issues from evaluators will be pooled with potential duplicates merged and issues with high average severity ratings rectified [37,53]. Descriptive statistics will be used to summarize heuristic violations and associated severity scores.
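The pooling step described above (merging duplicate issues and averaging severity) can be sketched as follows; the issue texts, heuristic labels, and severity values are hypothetical examples, not study data.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical pooled findings from evaluators:
# (issue description, violated heuristic, severity rating 0-4).
findings = [
    ("menu label unclear", "Match between system and the real world", 3),
    ("menu label unclear", "Match between system and the real world", 2),
    ("no undo after deleting a goal", "User control and freedom", 4),
]

# Merge duplicates reported by different evaluators.
merged = defaultdict(list)
for issue, heuristic, severity in findings:
    merged[(issue, heuristic)].append(severity)

# Average severity per unique issue; high averages are rectified first.
report = {issue: mean(sevs) for (issue, _), sevs in merged.items()}
print(report["menu label unclear"])  # 2.5
```

Sorting `report` by descending average severity then gives the prioritized fix list that the descriptive statistics summarize.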
Focus Group Interviews
Transcripts will be analyzed thematically in an iterative process of coding [54]. Analyses will focus on end user experiences and opinions regarding making lifestyle changes using the mobile device as a health tool, as well as on content, structure, and implementation of LIFE4YOUth. Systematic thematic analyses will follow a prescribed, sequential process: (1) noting overall impressions, (2) reducing and coding into themes, (3) searching for patterns and interconnections, (4) mapping and building themes, and (5) drawing conclusions. In order to ensure validity of the results and prevent bias in the qualitative analysis process, data will be independently coded by two researchers with a consensus reached by adjudication [41].
Usability Tests
After all user tests have been completed, observers and other members of the research group will discuss whether specific tasks stood out or hindered the progress in development of the program. Analysis of the videorecordings will be inspired by inductive program theory development [55].
Analysis will focus on features of the intervention related to design, format, instructions, navigation, terminology, and learnability that need to be redesigned. Descriptive statistics will be used to analyze problem counts and time taken. Average scores from the SUS will be used to identify average satisfaction [52].
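SUS scoring follows a fixed formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 onto a 0–100 range. A minimal sketch (the responses shown are hypothetical):

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

The average of such scores across the 5 test participants is what will be reported as overall satisfaction.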
Ethics Approval and Consent to Participate
The study has been approved by the Swedish Ethical Review Authority (Dnr 2019-01320). All participants will give written informed consent prior to participation in any study procedure (focus group interview, heuristic evaluation, user test).
Results
Recruitment started in May 2019. Data collection, analysis, and scientific reporting are expected to be completed in December 2019. The study was funded by a grant from the Swedish Research Council for Health, Working Life, and Welfare. Focus group interviews were undertaken with high school students from October through December, and full results are expected to be published in Spring 2020. A planned clinical trial will commence in Summer 2020.
Discussion
As a growing body of research suggests that health risk behaviors often do not occur in isolation, this study considers interventions that address lifestyle behaviors related to diet, physical activity, smoking, and alcohol. Also, more research is needed into the documentation and critical analysis of the formative research processes required in the development and refinement of effective mHealth interventions [24,26]. This protocol describes a participatory design study with the aim of developing an mHealth intervention to promote healthy lifestyles among high school students that can be delivered via school health staff. This protocol provides a scientific record of the methodologies used when developing the intervention in order to enhance transparency of research. Additionally, as described above, the LIFE4YOUth intervention program is part of a larger research program (funded by Forte, the Swedish funding agency for health and social affairs research) [28], and the formative work presented here will also be used as a framework for the other trials in the program.
Through formative research and participatory design, we believe this study will result in deepened knowledge regarding what aspects of content and structure end users (eg, high school students) consider important for designing mHealth lifestyle behavior interventions. More specifically, the study is expected to give answers as to whether an mHealth intervention that gives access to interactive and personal modules contained within a mobile phone-based dashboard is useful and accepted among high school students. This knowledge is valuable in order to guide further development of a final version of the novel mHealth intervention program LIFE4YOUth targeting high school students. An RCT will be conducted to determine the efficacy of the intervention. If found effective in the RCT, the program has the potential to be implemented nationally through school health services.
Structure of the secondary flow in the bifurcation of a blood vessel: patient-specific modeling and clinical Doppler measurements
The present contribution is aimed at patient-specific and clinical study of the secondary flow in the bifurcation of a blood vessel. Flow visualization is performed both with the ultrasound color Doppler imaging mode and with CFD data postprocessing of the flow in a carotid artery model with narrowing (stenosis). Special attention is paid to obtaining data for the secondary motion in the internal carotid artery. There was a good agreement in the results obtained between the patient-specific modeling and clinical measurements.
Introduction
Advances in numerical simulation in recent years have aided the investigation of cardiovascular diseases. Such studies can help clinicians and physiologists understand the mechanical environment in normal and diseased arteries. In particular, the flow in regions such as bifurcations or arterial curvatures is quite complex, and these regions are more prone to the development of atherosclerosis. The flow behavior through a healthy artery differs markedly from that through a stenosed artery, which exhibits elevated stresses and high resistance to flow. The study of such physiologically important flow through a stenosis has profound implications for the diagnosis and treatment of vascular disease. Observation of flow behavior in critical areas such as the bifurcation, the carotid bulb, flow separation, or turbulence in realistic anatomical models is possible only through patient-specific flow modeling [1,2]. A reliable flow simulation requires a realistic 3D vascular geometric model and unsteady flow boundary conditions. The geometry data are obtained through in-vivo measurements such as MRI slices and ultrasound, and angiographic data such as CT, DSA, and x-ray [3,4]. There is good agreement between results obtained from numerical simulations and phantom experiments [5]. However, studies aimed at validating patient-specific models by means of comparisons with clinical measurements of the blood flow structure are practically not encountered. Hence, in the present study hemodynamics is studied in a patient-specific model, considering a case study of a patient diagnosed with partial narrowing (stenosis) of the internal carotid artery. The aim of this work is to compare vortex structures of blood flow, measured clinically by ultrasound Doppler and calculated numerically for the patient-specific model of the artery bifurcation with stenosis.
Image processing
In the present study, a patient is considered whose left carotid system is normal and whose right common carotid artery is also normal, with partial narrowing of the internal carotid artery. The external carotid artery appears to be normal. The partially stenosed internal carotid artery is shown in Fig. 1.
The geometry of the carotid model was constructed in several stages using the ICEM CFD software, which is a part of the ANSYS Workbench platform. First, angiographic images of the carotid bifurcation in two mutually perpendicular planes (Fig. 1a) were digitally segmented into several arcs. Equally spaced points were created on the arcs, and then each pair of corresponding points from the arcs in the two planes was unified into one point in space. A smooth space curve was then drawn through the points using the 3D-Spline tool. This curve served as the axis of the carotid model. As the final stage, a cylindrical surface simulating the inner wall of the artery was constructed (Fig. 1b).
An angiographic study of the patient showed that the carotid bifurcation under consideration has a spatial curvature; an atherosclerotic plaque is located on the internal carotid artery in an asymmetric manner, covering the vessel by 64% over the diameter and 90% over the area, which corresponds to the case of severe stenosis.

Figure 1. Angiograms of the stenosed carotid bifurcation in two mutually perpendicular planes (a); 3D model of the stenosed carotid bifurcation (b)
Ultrasound visualization and quantitative evaluation of swirling blood flow in carotid bifurcation
A method for registration and estimation of swirling blood flow using the ultrasound Doppler technique was developed and applied by the authors [6]. With the help of this method the authors measured the axial and circumferential components of blood velocity for the stenosed carotid artery at the instance of maximum flow rate. The axial velocity component was measured in color Duplex imaging mode by registering impulse-wave Doppler spectra in the artery longitudinal section using a traditional technique (Fig. 2a). The circumferential velocity component evaluation was carried out in the color Duplex imaging mode by registering impulse-wave Doppler spectra in an artery cross section, with a sample volume placed in turns into the lateral and medial hemicircles of the artery lumen (Fig. 2b). The sample volume size corresponded to the vessel radius, and the angle between the blood flow direction and the ultrasound beam was set to 0°. The circumferential velocity of the blood flow was measured for each position of the sample volume.
Mathematical model and computational aspects Governing Equations
The numerical simulation of the 3D pulsatile flow in the model of the carotid bifurcation with 90% stenosis was carried out. The arterial wall is assumed to be rigid. Actual flow past the stenosis was in transition from laminar to turbulent for Reynolds numbers exceeding a certain critical value.
Computations based on the Reynolds-averaged Navier-Stokes equations were made taking into account the results of our clinical measurements, which showed that there were intense velocity pulsations past the stenosis. The governing equations are the continuity equation and the Reynolds-averaged Navier-Stokes (RANS) equations. The widely used k-ω SST turbulence model was chosen to close the problem formulation.
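For reference, the incompressible RANS system referred to above can be written in its standard form (reconstructed here, since the original equations did not survive extraction):

```latex
\frac{\partial \bar{u}_i}{\partial x_i} = 0, \qquad
\rho\left(\frac{\partial \bar{u}_i}{\partial t}
  + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}\right)
  = -\frac{\partial \bar{p}}{\partial x_i}
  + \frac{\partial}{\partial x_j}\left[
      \mu\left(\frac{\partial \bar{u}_i}{\partial x_j}
             + \frac{\partial \bar{u}_j}{\partial x_i}\right)
      - \rho\,\overline{u_i' u_j'}\right]
```

where overbars denote Reynolds-averaged quantities and the Reynolds stresses \(-\rho\,\overline{u_i' u_j'}\) are modeled by the k-ω SST closure.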
Boundary Conditions and Numerical Procedure
A swirl velocity profile and a variation in the mean flow velocity during the cycle were specified at the inlet boundary (Fig. 3). The mean velocity curve was obtained from clinical measurements of the patient's blood flow by the ultrasound Doppler method. The cycle period is 1 s. The velocity-increase phase makes up 15% of the total cycle time. The maximum mean flow velocity for the period is 0.7 m/s. The ratio of the maximum circumferential velocity to the maximum axial velocity for the inlet swirl velocity profile is 0.3. A constant pressure was specified at the outlet of the external carotid artery, and the mean velocity curve was specified at the outlet of the internal carotid artery. Although blood is physiologically a non-Newtonian fluid, the Newtonian assumption is acceptable in the present study because the focus is on large arteries, where relatively high shear rates occur. The dynamic viscosity coefficient is 0.004 Pa s and the density is 1000 kg/m³. The Reynolds number at the maximum flow rate, based on the inner vessel diameter and the mean fluid flow velocity, is 1050.
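The quoted Reynolds number can be cross-checked against the stated fluid properties. The inner vessel diameter is not given explicitly in the text, so the value recovered below is an inference from Re = ρUD/μ, not a reported measurement:

```python
rho = 1000.0  # density, kg/m^3 (from the text)
mu = 0.004    # dynamic viscosity, Pa*s (from the text)
U = 0.7       # maximum mean flow velocity, m/s (from the text)
Re = 1050.0   # Reynolds number quoted in the text

# Diameter implied by Re = rho * U * D / mu (not stated in the paper).
D = Re * mu / (rho * U)
print(round(D, 6))  # 0.006 -> a 6 mm inner diameter, plausible for a carotid artery
```

The consistency of these four numbers supports the stated Re = 1050 at peak flow.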
Geometry and grids were built in ICEM CFD v.16. The finite-volume method was employed to solve the RANS equations along with the SST turbulence model. The computational domain was discretized into a 'mixed' mesh with structured zones of hexahedral cells and unstructured zones of tetrahedral cells. The total number of cells is approximately 2,600,000. The simulation of the carotid artery was carried out for 3 pulse cycles, and results from the last cycle are considered for the investigation. All calculations were performed with the ANSYS CFX v.16 software.
Results
The secondary flow evolution in the patient-specific model of the stenosed carotid bifurcation is illustrated by Fig. 4, showing the tangential velocity field in cross sections along the common and internal carotid arteries at the instance of maximum flow rate. Swirling flow is observed in the common carotid artery. The computations revealed that Dean vortex pairs, in which the fluid rotates in opposite directions, form at the ostium of the internal carotid artery. The Dean vortices have the form of two symmetric structures elongated along the outer wall of the internal carotid artery. In front of the stenosis, the flow with Dean vortices is transformed into a converging flow. Immediately downstream of the stenosis, two asymmetric vortices arise; one of them significantly exceeds the other in size and intensity and generates a swirling flow. Fig. 6 shows the numerical and clinical variation of the maximum axial and circumferential Doppler velocity components along the common and internal carotid arteries. The axial Doppler velocity variation is characterized by a local maximum at the stenosis of the internal carotid artery. The circumferential Doppler velocity variation has a maximum at a distance of nearly 6 mm downstream of the stenosis, in the section where the swirl forms; further downstream the swirl decreases. Discordance of the results does not exceed 20% for the axial velocity and 30% for the circumferential Doppler velocity.

Figure 6. Variation of the maximum axial (a) and circumferential (b) Doppler velocity components along the common and stenosed internal carotid arteries at the instance of maximum flow rate
Conclusions
Using clinical ultrasound measurements allows validation of the patient-specific model of the artery bifurcation with stenosis. The computations revealed that Dean vortex pairs form at the ostium of the internal carotid artery. In front of the stenosis, the flow with Dean vortices is transformed into a converging flow. Immediately downstream of the stenosis, two asymmetric vortices arise; one of them significantly exceeds the other in size and intensity and generates a swirling flow. Clinical and numerical results show qualitative agreement of the secondary flow structures. The discrepancy between them does not exceed 20% for the axial velocity component and 30% for the circumferential velocity component.
Proof of concept for the use of trained sniffer dogs to detect osteosarcoma
Sarcomas are mesenchymal cancers which often show an aggressive behavior and patient survival largely depends on an early detection. In last years, much attention has been given to the fact that cancer patients release specific odorous volatile organic compounds (VOCs) that can be efficiently detected by properly trained sniffer dogs. Here, we have evaluated for the first time the ability of sniffer dogs (n = 2) to detect osteosarcoma cell cultures and patient samples. One of the two dogs was successfully trained to discriminate osteosarcoma patient-derived primary cells from mesenchymal stem/stromal cells (MSCs) obtained from healthy individuals. After the training phase, the dog was able to detect osteosarcoma specific odor cues in a different panel of 6 osteosarcoma cell lines with sensitivity and specificity rates between 95 and 100%. Moreover, the same VOCs were also detected by the sniffer dog in saliva samples from osteosarcoma patients (n = 2) and discriminated from samples from healthy individuals with a similar efficacy. Altogether, these results indicate that there are common odor profiles shared by cultures of osteosarcoma cells and body fluid samples from patients and provide a first proof of concept about the potential of canine odor detection as a non-invasive screening method to detect osteosarcomas.
Osteosarcoma is the most common type of primary solid tumor arising from bone tissue 1 . Although it has a relatively low overall incidence (0.3 per 100,000 per year), this type of tumor represents approximately 15% of pediatric tumors 1,2 . As other types of sarcoma, osteosarcomas arise upon the malignant transformation of mesenchymal stem/stromal cells (MSCs) or their derived cell types along the osteoblastic lineage [3][4][5] . Conventional osteosarcoma, the most common subtype, is always high-grade and is frequently metastatic at the time of diagnosis 6 . The current standard of care, based on accurate surgery accompanied by chemotherapy, has remained largely unaltered for decades, and patients with metastatic disease still face dismal 5-year overall survival rates below 20% 2,6 . Therefore, as in other tumor types, an early diagnosis is key to improving the prognosis of osteosarcoma patients. In this regard, already established population screening methods, such as those available for the early detection of breast, colon or prostate cancer, have been successful in improving patients' survival [7][8][9] . Nevertheless, these screening programs involve invasive and/or costly methodologies, and they are not available for most tumor types, including osteosarcomas.
In order to develop novel non-invasive detection techniques, much attention is being given to the fact that individuals with cancer may release specific volatile organic compounds (VOCs) 10 . These odorous chemicals with low molecular weight can be detected both in body-derived non-tumor samples (blood, urine, stool, exhaled breath, etc.) and in tumor samples 10 . This cancer-associated "volatilome" profile is the result of specific metabolic changes induced by tumor cells, and its detection may provide a fully noninvasive diagnostic and/or prognostic biomarker 11 .
Identification of VOCs in a gaseous mixture can be done by chemical analytical techniques such as gas chromatography linked to mass spectrometry (GC/MS) or by using sensor arrays or "electronic nose" (eNose) devices to create specific smellprints or VOC profiles 10,12,13 . In addition to these laboratory technologies, the complex www.nature.com/scientificreports/ olfactory system of dogs has proven its ability to detect VOCs in concentrations of parts per trillion. Indeed, for many compounds, dogs have shown a limit of detection lower than the most sensitive mass spectrometry or eNose systems 13 . Thus, apart from being long used for many civilian, military and forensic applications, trained sniffer dogs have also demonstrated their ability to discriminate cancer-associated VOCs in body fluids and tumor samples from patients with non-small cell lung cancer, breast cancer, prostate cancer, colorectal carcinoma, melanoma, or ovarian cancer 14,15 . In order to achieve the most reliable results, the implementation of standardized methods for sample handling and dog training is essential 16 . In this regard, cell lines may provide a convenient source of specimens presenting low sample-to-sample variability and an absence of patient-specific confounding odors (stress hormones, medications, etc.). Therefore, the use of cell lines may facilitate training and pilot testing experiments to validate initial hypotheses regarding the suitability of canine scent to discriminate cancer patients 17 .
The ability of sniffer dogs to detect sarcomas has not been previously studied. The objective of this study was to provide a first proof of concept about the potential of using sniffer dogs as a screening method to detect osteosarcomas. To this end, we trained dogs to discriminate osteosarcoma cell cultures from healthy MSCs cultures and then analyzed their ability to detect specific odor signals in new osteosarcoma samples (cell lines and saliva from patients).
Materials and methods
Cell cultures, saliva samples and ethics statement. A panel of primary, immortalized and cancer cell lines was used in training and/or testing experiments. The main features of these cell cultures are listed in Table 1. This panel includes two primary cell lines (OST-3 and OST-4) generated from osteosarcoma samples surgically resected at the Hospital Universitario Central de Asturias (Oviedo, Spain) as previously described [18][19][20] . OST-3 derives from a conventional osteoblastic osteosarcoma resected from a 10-year-old female patient, and OST-4 corresponds to a dedifferentiated osteosarcoma from a 69-year-old female patient. The OST-3 and OST-4 cells used in this study did not accumulate more than 20 passages in in vitro culture. In addition, 5 other established osteosarcoma cell lines (143B, Saos-2, U2OS, G292 and MG63), originally obtained from the American Type Culture Collection, were used in testing experiments. Since MSCs are the cell type of origin of most sarcoma subtypes, we used two cultures of human bone marrow-derived MSCs (BM-MSCs) derived from healthy donors as non-tumor controls. These control BM-MSCs were, respectively, a primary culture (BM-45) (Inbiobank, San Sebastian, Spain) and a cell line immortalized through the overexpression of hTERT and the inactivation of p53 with the E6 antigen of human papillomavirus 16 (MSC-2H6) 21,22 . All cell lines were tested to rule out mycoplasma contamination using the Biotools Mycoplasma Gel Detection kit (B&M LABS, Spain). To collect samples, cells were seeded in 75 cm 2 flasks (Corning, Glendale, AZ) and cultured in DMEM (Thermo Fisher Scientific, Waltham, MA) with 10% FBS (Biowest, Riverside, MO), 1% Glutamax (Gibco, Thermo Fisher Scientific), and 100 U/ml penicillin/streptomycin (Gibco, Thermo Fisher Scientific) at 37 °C and 5% CO2.
Once cultures reached 80% confluence, 4 ml of medium was collected in clear glass vials with screw caps (Supelco, Bellefonte, PA, USA), which were stored at − 80 °C until they were used in odor detection experiments. Saliva samples were collected from patients diagnosed with osteosarcoma at the Hospital Universitario Central de Asturias. All patients scheduled at the hospital's medical oncology service as of July 2021 were invited to participate. Due to the low incidence of osteosarcomas, only two patients could be enrolled for this pilot study. Negative control samples were also obtained from healthy donors with no history of oncological diseases (Table 1). Part of each sample was diluted 1:10 in the same culture medium used to grow cell lines. Both diluted and non-diluted saliva samples were aliquoted and stored in clear glass vials at − 80 °C. A serial number was written on each sample at the time of collection to identify individual information.
For training and testing experiments, aliquots of cell culture medium or saliva samples were thawed at 4 °C. Then, a sterile gauze was soaked with 0.1 ml of sample and placed in sample containers with perforated lids which, in turn, were inserted into cylindrical buckets so that the perforated lids protruded and exposed the odor of the sample (Fig. 1A). Thawed aliquots were kept at 4 °C and used for up to a week.
Patient samples were obtained at the University Central Hospital of Asturias. All experimental protocols have been performed in accordance with institutional review board guidelines and with the Declaration of Helsinki and were approved by the Institutional Ethics Committee of the Principado de Asturias (reference CEImPA 2021.340). Informed consent was obtained from all participants.
Training and testing protocols with sniffer dogs were carried out in accordance with the institutional guidelines of the University of Oviedo and Spanish legislation. Ethical review and approval were not required for the animal study because it involved client-owned animals receiving best-practice veterinary care, and these animals were not subjected to painful or distressful protocols. Written informed consent was obtained from the owners for the participation of their animals in this study.
Dogs and experimental setup. Selection of dogs was based on the following inclusion criteria: (i) dogs had to be clinically healthy, (ii) they had to be regularly available for training, and (iii) they had to be previously trained for scent-based searches. Regarding this last requisite, we did not find dogs previously trained for cancer detection in Asturias (a region of 1 million inhabitants); therefore, we decided to enroll dogs that had been trained and used in the search for missing persons, since they were familiar with training procedures. Thus, two female Belgian Malinois dogs previously trained for the search and rescue of missing people were used in this study. They were a 1-year-old daughter (dog#1, Nai) and her 7-year-old mother (dog#2, Moon).
Training and testing experiments lasted for 16 months with several rest intervals of a maximum of 3 weeks when deemed necessary by the trainer. During these periods, the dogs were kept under appropriate conditions with veterinarian surveillance, as required.
In training and testing experiments, up to four identical buckets containing sample containers were arranged in a row at one-meter intervals, as previously described 15,23 (Fig. 1B). The dogs were free and not guided by the trainer during the search for the target specimens, and each round of searching started with the trainer command "search". They were trained for a "sit stare" final response when finding a positive target sample (Fig. 1C). The criteria to define positive and negative detections were similar to those of previous studies 24 . Thus, a correct detection was defined as: (i) identification of the target specimen by sitting in front of the bucket that contained the positive sample and maintaining this position for more than 2 s, considered a True Positive (TP) identification, or (ii) sniffing while ignoring control specimens, considered a True Negative (TN) identification. An incorrect detection was defined as: (i) identification of a control specimen as the target specimen, considered a False Positive (FP) identification, or (ii) sniffing without sitting in front of the target specimen, considered a False Negative (FN) identification. Hesitations longer than 2 s before giving a response were also considered FN or FP identifications, depending on whether the sample was a positive sample or a control. As a guiding principle for training, a correct detection was marked with a clicker and rewarded with food.
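The TP/TN/FP/FN definitions above feed directly into the sensitivity and specificity figures reported later. A minimal sketch in Python; the tallies below are hypothetical, chosen so the resulting rates happen to match Dog#1's reported 97.65%/98.57% (the study's actual trial counts are in its tables):

```python
def detection_rates(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: 83 correctly marked positives, 2 missed;
# 276 correctly ignored controls, 4 false alerts.
sens, spec = detection_rates(tp=83, fn=2, tn=276, fp=4)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")
```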
We defined a trial as each pass the dog made along the line of buckets until it marked a positive. A session was defined as all of the consecutive trials completed by the dog. Training and testing sessions lasted between 30 and 40 min, contained between six and fourteen trials, and did not occur more than twice a day, with at least 2 h between sessions. A video record was taken for most testing trials, along with a written record of the dogs' behavior at each position. Nitrile gloves were used when handling samples and buckets. Sample containers and buckets were cleaned at the end of each session with 70% ethanol.
Training. The training of dogs to detect specific odor cues was structured in two phases (Fig. 1D). In the first phase, the dogs were trained to detect the smell of a reference substance not present in the human body. Thus, we used cedarwood oil to train the dogs to search for specific odors without confounding their future searches for human-related scents. The dogs were initially rewarded for approaching, smelling and sitting with their nose close to a bucket containing the reference sample. Then, besides the positive sample, we sequentially added buckets containing control gauzes not soaked with any odorous substance until all four positions were filled. When the dogs acquired the ability to mark positive samples while ignoring controls and all-blank trials, we moved to the following stage. The aim of the second phase was to train the dogs to detect specific odors associated with osteosarcoma cell cultures.

Testing. Testing experiments were performed with four testing buckets placed in a row and containing either: (i) one cancer sample (from cell lines or saliva) and three controls, or (ii) four control samples (Fig. 1B). We performed both non-blind and blind experiments, according to whether the trainer was or was not informed about the disposition of the samples. Results were recorded by an assistant located outside the testing room in a position where he could view the dog but the dog could not see him. The assistant informed the trainer about the results (Fig. 1E). In a first set of experiments aimed at testing the ability of the sniffer dog to detect the different cell lines, it was exposed to media from the different osteosarcoma cell lines in a sequential fashion, i.e. we performed testing sessions with media from the first cell line before starting sessions with the second, and so on. Then, we performed a set of trials in which the positive sample was randomly chosen by the assistant from the media of all cell lines.
To gain insight into the limit of detection of tumor samples by the canine olfactory system, we also exposed the sniffer dog to media from OST3 cultures diluted 1:5, 1:10 and 1:50 in fresh medium. As non-tumor controls in experiments using cell lines, we used the culture media from both the cell line used in training experiments (MSC-2H6) and a new BM-MSC line (BM-45). After finishing testing experiments with cell lines, we also aimed to test whether sniffer dogs were able to detect specific odor cues in patient specimens. As a pilot study, we carried out testing experiments using saliva samples from two osteosarcoma patients as positive targets and twelve saliva samples from healthy donors as negative controls.
As the training was done with culture media samples, we used the saliva samples diluted 1:10 in the same cell culture medium to rule out any positive or negative effect of the culture medium on the dog's responses. Finally, to further explore the ability of sniffer dogs to detect osteosarcoma saliva samples, we designed testing experiments where Dog#1 was exposed to pools of samples containing an osteosarcoma sample mixed with 9 negative controls in similar proportions (positive pools) and/or pools of 10 negative samples (control pools) (Fig. 1E). Control samples used in each trial, as well as their positions, were randomly selected by the assistant. In all cases, at least 6 sessions and more than 60 trials were performed.
Results
Training. Both dogs were easily trained to detect the reference substance without hesitation. Thus, both dogs completed the first phase of the training with correct detection rates greater than 99%. During the second phase of training, Dog#1 progressed rapidly and showed a great ability to discriminate the culture media of the primary osteosarcoma cell line OST3. Although Dog#2 also demonstrated its potential to detect tumor samples, it sometimes entered stages of confused behavior with frequent failures. To register the level of detection ability reached at the end of this phase, we performed blind experiments using the same sarcoma (OST3) line employed during the training and two different control BM-MSC cultures (MSC-2H6 and BM-45). Dog#1 was able to detect positive samples and discard negative samples with a sensitivity of 97.65% and a specificity of 98.57%, while Dog#2 was slightly less efficient and discriminated tumor samples with a sensitivity of 90.90% and a specificity of 84.78% (Table 2). Despite having achieved high levels of detection, we decided to exclude Dog#2 from further experiments due to its irregular behavior. For instance, this dog had difficulty paying attention to the sample container placed in the first position of the row, or when positive samples were repeatedly placed in the same position in consecutive trials. Therefore, we accomplished testing experiments only with Dog#1.
Estimation of the limit of scent detection in osteosarcoma samples. Dog#1 was able to discriminate tumor samples diluted 1:5 and 1:10 from undiluted controls with an efficacy similar to that shown with undiluted samples, both in non-blind and blind experiments. However, when 1:50 diluted samples were tested, the dog's ability to correctly detect tumor samples decreased significantly, and the sensitivity dropped to 50% (Table 3, Fig. S1A). These results showed that culture media from tumor cells contained specific odor signatures at a concentration that was at least one order of magnitude above the limit of detection of the canine olfactory system.
Detection of common odor signatures in osteosarcoma cell lines.
Next, we exposed Dog#1 to culture media from a panel of primary and established cell lines not previously used during the training. In sequential testing sessions, Dog#1 was able to discriminate samples of OST4, Saos-2, 143B, U2OS, G292 and MG63 osteosarcoma cells from control samples with sensitivity and specificity rates between 95 and 100% in all cases, both in non-blind and blind experiments (Table 4). In these experiments, we did not find statistically significant differences in the ability of the dog to detect the different cell lines assayed (Fig. S1B). Relevantly, Dog#1 correctly identified all osteosarcoma cell lines in the first test in which it was exposed to the samples. Likewise, the dog correctly discarded the control MSC cultures used in these experiments (Table 5). The sequential representation of the sensitivity and specificity rates obtained in consecutive sessions showed that these values were higher than 95% from the initial session for all cell lines (with the only exception of the sensitivity for OST-4, 89%) and remained essentially constant during all sessions performed (Fig. S2). Afterwards, we performed blind testing experiments using samples from all the osteosarcoma lines chosen at random in each trial. Similar to the results obtained in sequential detection experiments, Dog#1 detected randomly chosen osteosarcoma cell lines with sensitivity and specificity rates of 96 and 98%, respectively (Table 6). These results strongly suggest that osteosarcoma cell lines share common odor signatures that can be detected by a trained sniffer dog.
Detection of saliva samples from osteosarcoma patients. Finally, we aimed to test whether the common olfactory signature detected in cell lines could also be detected in patient specimens. In a pilot study, we found that the dog was able to discriminate saliva samples from two osteosarcoma patients from healthy controls with sensitivity and specificity values close to 100%, both in non-blind and blind sessions (Table 7). As in experiments using cell lines, the dog correctly identified all tumor and control saliva samples the first time it was exposed to them (Table 5). Finally, Dog#1 demonstrated a similar efficacy in experiments in which the osteosarcoma sample was pooled with 9 negative controls (Table 7).
Discussion
In this study we took advantage of the privileged olfactory system of dogs 13 to detect specific odor signatures with diagnostic potential in sarcomas. Previous studies have already shown the ability of dogs to discriminate cancer-associated VOCs in different types of epithelial cancer 14,15 ; however, this is the first study demonstrating their potential ability to detect sarcomas. Cancer types with available early screening programs, such as breast, prostate or colon cancer, have significantly improved their survival rates [7][8][9] . On the other hand, sarcomas are, in general, difficult-to-treat tumors that often develop resistance to current treatments, leading to the occurrence of relapses and metastases 2,26,27 . Therefore, the improvement of patient survival for sarcomas largely depends on early detection at a more curable disease stage. Our study provides a first proof of concept that supports the development of screening programs for sarcoma based on the detection of specific VOC profiles by sniffer dogs as a reliable, non-invasive and cost-effective approach to favor early diagnosis for sarcoma patients. Our experiments using cell lines, both patient-derived primary cultures and established cell lines, suggest the existence of common odor signatures that can be detected by trained dogs. Several data support the use of culture media from these cell lines as an ideal starting material for training sniffer dogs. First, we and others have demonstrated that these low-passage primary cultures represent close-to-patient models that keep the most relevant genomic and functional alterations of the original tumors 20,28 . Therefore, it could also be expected that the most relevant VOCs and odor signatures produced by metabolic processes occurring in tumors are also being produced in cell cultures. Moreover, cell lines, which are exclusively composed of tumor cells, do not contain potential confounding odors produced by other cell types or body fluid substances.
Finally, it is well established that MSCs represent the most usual cell of origin for osteosarcomas and other types of sarcomas [3][4][5] . Therefore, cultures of healthy MSCs represent an ideal choice as non-tumor control pairs for osteosarcoma cell lines in training and testing experiments. In line with our original hypothesis and our results, a few previous studies have used cell lines and/or tumor biopsies from breast, colon, ovarian or cervical tumors as tumor-only models to train cancer detection dogs, with positive results 17,24,25,29 .
The protocols used in this study regarding sample preparation and disposition, dog handling and reward, and data recording were similar to those used in other studies 14,15,17,23,30,31 . After training with an osteosarcoma primary cell line, the sniffer dog was able to detect culture media from 6 other osteosarcoma cell lines not previously used during the training process with sensitivity and specificity rates between 95 and 100%. Moreover, the dog was also able to discriminate saliva samples from two osteosarcoma patients from saliva samples obtained from healthy donors with similar efficacy. These high specificity and sensitivity rates are in line with the studies showing the most promising ability of sniffer dogs to detect breast, ovarian, prostate or lung cancer 14 .
The results of this study suggest that: (i) there are common odor profiles shared by cultures of osteosarcoma cells obtained from different patients; (ii) the VOCs producing these common olfactory signals circulate throughout the body and can be detected in easily accessible fluids such as saliva; and (iii) the use of media from cell cultures is a useful strategy to train cancer detection dogs. Besides, the experiments performed with positive culture media diluted in control media or with positive saliva samples mixed with healthy samples revealed that the concentration of cancer-specific scents was well above the limit of detection of the canine olfactory system. Moreover, in this study we conducted both sessions where the trainer was informed about the disposition of the samples and sessions where this information was blinded to the trainer. Our results suggest that in an experimental setting such as ours, where the dogs are off-leash during the trials, this factor does not have a relevant effect on the results.
Our experimental setup, where multiple consecutive tests were performed with the same positive samples, allowed us to obtain robust data on the sensitivity and specificity of the detection. This repetitive procedure also has associated risks, such as the possibility of repetitive reinforcement learning of the positive samples by the sniffer dog. However, the fact that our sniffer dog was able to detect positive samples with very high sensitivity and specificity rates from the first session, and that these values remained essentially constant during all sessions performed (Fig. S2), suggests that the reinforced learning associated with the repetition of tests did not play a relevant role in our experiments.
To the best of our knowledge, this is the first study providing a proof of concept of the feasibility of detecting a specific cancer scent in saliva. Being able to detect positive sarcoma samples in such an accessible fluid would facilitate the implementation of future screening programs. Osteosarcoma occurs predominantly in adolescents and younger adults between 10 and 19 years 1 . Therefore, screenings could be specifically targeted to the school population in this age range. Moreover, we showed that sniffer dogs are able to detect positive saliva samples mixed in a pool of negative samples. If the results of our pilot study are reproduced in future studies analyzing larger numbers of samples, the possibility of speeding up the screening process by mixing samples from groups of schoolchildren and subsequently re-analyzing individually only the few mixtures that the dog marks as positive could be explored. Besides their possible use in target-population screenings, other uses for trained cancer sniffer dogs could be speculated. For instance, scent dogs located in patients' associations could sniff patients on a regular basis in order to achieve an early detection of relapses in osteosarcoma patients who are in remission.
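The pooled-screening idea described above is essentially Dorfman group testing: pools of k samples are sniffed first, and only members of positive pools are re-tested individually. A back-of-envelope sketch of the expected workload per person; the prevalence p is an illustrative assumption, not a figure from the study:

```python
def expected_tests_per_person(k, p):
    """Dorfman group testing: one pool test shared by k people, plus k
    individual re-tests whenever at least one of the k samples is positive."""
    return 1 / k + (1 - (1 - p) ** k)

# Assumed prevalence of 0.1% in the screened population (hypothetical).
for k in (5, 10):
    print(f"pool size {k}: {expected_tests_per_person(k, p=0.001):.4f} tests/person")
```

With these assumptions, pools of 10 cut the workload to roughly 0.11 tests per person, an order-of-magnitude saving over individual testing.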
Although this approach using sniffer dogs could never be considered as a definitive diagnostic method, it could be used as an efficient, rapid and cost-effective pre-screening aid to detect possible cases in a well-defined targeted population and thus contribute to a more efficient use of current diagnostic methods.
Besides, we also hypothesize that combining the olfactory ability of dogs with analytical techniques may lead the way to synergistically improving the detection achieved by each method individually. Thus, the identification of specific compounds or VOC profiles using GC/MS may serve both: (i) to improve dog training to specifically detect these compounds; and (ii) to refine eNose sensors to design novel diagnostic devices applicable to the clinic in the future. The positive results obtained here must be interpreted taking into account the limitations of the study. An important limitation is the fact that only one dog was used in testing experiments. The inclusion criteria led us to select only two dogs. Both of them finished the training process demonstrating high percentages of sensitivity (98 and 91%, respectively) and specificity (99 and 85%) in the detection of sarcoma samples. Unfortunately, one of them had to be excluded from testing experiments due to her irregular behavior in training trials. While this is clearly an insufficient sample size, the results obtained with the selected dog provide an initial proof of concept of the existence of specific VOC profiles in sarcomas that can be detected by a properly trained dog. In order to establish reliable statistics about the efficiency of our training method and the ability of trained sniffer dogs to detect sarcomas, further research involving a considerably higher number of dogs is needed. Another important limitation of our study is that we used only two saliva samples from osteosarcoma patients to confirm in a clinical setting the results obtained using cell lines.
Although the positive results of this pilot study provide initial evidence of the feasibility of detecting osteosarcoma-specific odors in saliva samples, conducting new studies that include a large number of saliva samples from healthy controls and patients with osteosarcoma is essential to demonstrate the potential of sniffer dogs to detect osteosarcoma in these types of samples. We did not conduct a sample size calculation in this study; however, our results may be of great use in estimating the number of patient samples needed to achieve sufficient statistical power in subsequent studies. Finally, to strengthen the results of future studies, other distractors, such as samples from other tumor types and/or from patients suffering from other bone-related diseases, could be included as controls in training and testing trials.
Overall, our study provides a proof of concept about the existence of a specific cancer scent in osteosarcoma that can be efficiently detected by sniffer dogs trained with osteosarcoma cell lines. Moreover, VOCs producing this odor profile can be detected in easily accessible body fluids such as saliva, which may facilitate the development and implementation of rapid and cost-effective screening methods for the early detection of osteosarcoma.
|
v3-fos-license
|
2024-05-26T15:56:48.183Z
|
2024-05-21T00:00:00.000
|
270006367
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.pjms.org.pk/index.php/pjms/article/download/7871/2196",
"pdf_hash": "a6c53d74701d92c8e97eb141fec76d32b6f47b2f",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:753",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b36eb01f11f17b8b2433d14801423533ad95c4c5",
"year": 2024
}
|
pes2o/s2orc
|
Clinical efficacy of a new type of Sports Rehabilitation Therapy Bed
Objective: To prove that the “sports rehabilitation bed” is a device aimed at improving the precision of stretching, which can help to reduce the difficulty of rehabilitation therapy, cut down the manpower input of rehabilitation therapy, and shorten the therapy duration as well. Methods: This was a clinical comparative study. Twenty patients who underwent stretching therapy in Sichuan Province Orthopedic Hospital from June 2020 to August 2020 were randomly selected to carry out a control study on both lower extremities. The experimental group was given sports rehabilitation bed to assist rehabilitation therapy, while the control group was given conventional bare-handed stretching rehabilitation therapy. The stretching angle, stretching value, and the effective rate of stretching therapy between the two groups to analyze the clinical value of the new sports rehabilitation therapy bed. Results: The stretching angle in the experimental group when using the sports rehabilitation therapy bed for stretching was lower than the conventional bare-handed stretching in the control group (T<0, P=0.05), with a statistically significant difference; the stretching values of the experimental group were lower than those of the control group(P<0.01), with a statistically significant difference. Moreover, the response rate of stretching therapy in the experimental group was lower than that in the control group(P<0.05), with a statistically significant difference. Conclusion: Sports rehabilitation therapy beds can results in the advantages of effectively preventing iatrogenic injury in the process of stretching, and providing a more accurate and convenient stretching therapy method than the current commonly used bare-handed stretching for sports rehabilitation and intervention.
INTRODUCTION
Muscle stretching, whether active or passive, is one of the conventional rehabilitation therapies for musculoskeletal diseases. The following rehabilitation methods are commonly used:
Proprioceptive neuromuscular facilitation (PNF): a method in which muscles are forced to contract to induce reflex self-inhibition, and are then relaxed by stretching.
Static stretching: the method of stretching muscles to the extreme point and holding still.
Dynamic stretching: the method of slowly moving the joint into the stretching position.
Elastic-shock stretching: the method of making limbs move from the initial posture to the stretching posture by rebound movement.
In the process of diagnosis and treatment, the above methods are used alone or in combination. In any case, their rationale is to stretch the shortened or contractured tissues and lengthen them, so that subjects can regain the extensibility of soft tissues around joints, reduce muscle tension, improve muscle excitability and improve or restore the normal range of motion of joints. For example, the incidence of patellofemoral pain syndrome (PFPS) in the general population is as high as 22.7%, 1 which is significantly related to the lack of flexibility of the quadriceps femoris (QF). 2 In this regard, stretching training of the quadriceps femoris can effectively improve its flexibility. However, the quadriceps femoris is a cross-joint muscle. In the process of knee extension, the antagonistic muscles of the quadriceps femoris (AM) are the biceps femoris (BF), semimembranosus muscle (SM) and semitendinosus. During hip flexion, the antagonistic muscles are the biceps femoris, semitendinosus, gluteus maximus (GM) and piriformis muscle (PM). The hip can only be effectively stretched to this muscle group if it is fully extended posteriorly and then flexed. In view of the linkage relationship between the pelvis and lumbar vertebrae, when the hip is extended, there will be corresponding pelvic forward extension and lumbar backward extension if pelvic fixation is not sufficient, which will increase the compression force at the back of the waist, thus causing lumbar injuries. Therefore, the antagonistic action of antagonistic muscles should be limited during stretching. Besides, muscle stretching is also one of the commonly used methods in the field of sports training, where it is widely used to improve sports performance.
In most studies, dynamic stretching is mainly used to warm up before sports, as it can activate muscles and enhance their maximum strength and explosive power, 3 while static stretching is mainly used for recovery after sports, as it can help athletes improve the stiffness, elasticity and viscosity of the myofascia, 4 relieve pain, 5,6 accelerate fatigue elimination and reduce the incidence of muscle injury. 7 Still, a few scholars dispute this. Some foreign scholars have pointed out that many studies on stretching lack a description of the stretching details, 8 describing only the stretching method, time and strength, such as "static stretching, stretching for 30 seconds each time, repeated for three sets, with a 30-second rest between sets", 9 in which the strength is mostly evaluated by subjective feelings, such as "stretching with mild discomfort". 10 Whether there is a unified standard of body position during stretching, differences in the operator's proficiency, and the lack of quantitative evaluation of the stretching degree may all affect the research results. But at present, articles and textbooks on stretching do not address this aspect.
There are still some problems in the research and clinical application of stretching, mainly manifested in poor standardization of posture, a lack of accuracy of movements, and a lack of quantitative evaluation of stretching strength and angle, all of which may affect the rigor of research and the clinical effect. Therefore, it is necessary to design a sports rehabilitation therapy bed to solve these problems. The sports rehabilitation therapy bed can not only strictly regulate the posture of the subjects during stretching, but also ensure the accuracy and effectiveness of stretching and reduce the workload of the therapist. For inexperienced therapists, the treatment bed can help make up for their shortcomings and achieve a better stretching effect.
METHODS
This was a clinical comparative study. Twenty patients who underwent stretching therapy in Sichuan Province Orthopedic Hospital from June 2020 to August 2020 were enrolled, with each patient's left and right lower limbs serving as their own control. By the random number method, the limbs were allocated to two groups: the experimental group (the 20 left lower limbs) and the control group (the 20 right lower limbs). The experimental group received muscle stretching therapy on the sports rehabilitation therapy bed, while the control group received bare-handed muscle stretching therapy. Before treatment, each subject underwent a muscle length test of the rectus femoris using the Thomas test method: the height h (cm) from the marked point on the lateral femoral condyle to the examination bed, the angle between the femur and the bed surface, and the knee flexion angle were measured. The sports rehabilitation therapy bed was then placed in the rectus femoris stretching posture, and the stretching intervention was performed on the subjects. Afterwards, the height and the two angles were measured again, and the differences before and after intervention were compared. Ethical Approval: The study was approved by the Institutional Ethics Committee of Sichuan Province Orthopedic Hospital (No.: KY2022-029-001; Date: October 19, 2022), and written informed consent was obtained from all participants.
Inclusion criteria:
• Subjects aged 20-30 years;
• Subjects with complete medical history information, no history of neuromuscular or skeletal system injury, and high compliance;
• Subjects without motor system disease within six months prior to participation in the intervention treatment;
• Subjects who did not receive any intervention or treatment for a diagnosis of skeletal muscle, bone or joint disease within six months prior to participation in the intervention treatment;
• Subjects who agreed to the study protocol and signed the informed consent form.
Exclusion criteria:
• Subjects who did not accept the study protocol;
• Subjects who did not meet the inclusion criteria;
• Subjects who did not follow the prescribed protocol;
• Subjects who failed to complete the trial for various reasons (e.g., adverse events, missed visits);
• Subjects who voluntarily withdrew their informed consent;
• Subjects who developed serious adverse reactions and discontinued the trial by the joint decision of the subject and the investigator.

sEMG (electrode disc diameter of 1 cm, distance between the two electrode centers of 2 cm) was recorded by the bipolar recording method. Electrode sheets were affixed according to the operation manual of the MegaWin ME6000 sensor: three electrode sheets each were affixed to the most prominent part of the muscle belly of the quadriceps femoris (target muscle), biceps femoris (antagonistic muscle) and tibialis anterior (synergistic muscle), with the connecting direction of the electrode sheets parallel to the direction of the muscle fibers. The EMG signals of the quadriceps femoris, biceps femoris and tibialis anterior were collected by a 16-channel Mega ME6000 surface electromyography system while the subjects stretched on the sports rehabilitation therapy bed for the first time and stretched by hand, respectively. The knee joint angle of the subjects was recorded from the beginning of stretching to the onset of the stretching sensation. Furthermore, the EMG signals during the 30 seconds after the onset of the stretching sensation were recorded, with the average electromyography (AEMG) as the index, and the obtained values were standardized. The two groups were followed up for six months, and the follow-up of all patients was completed by the same group of surgeons.

Statistical analysis: All data in this study were statistically analyzed with SPSS 21.0 software, and measurement data were expressed as mean ± standard deviation (x̄ ± s). A paired t test was used for intra-group comparison, and an independent-sample t test for inter-group comparison. Enumeration data were expressed as rates (%), and the χ² test was used for the comparison of rates, with P<0.05 indicating a statistically significant difference.
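The AEMG index described in the methods above — the mean of the rectified EMG signal over the 30-second window after the onset of the stretching sensation, followed by standardization — might be computed roughly as in the sketch below. The sampling rate, the synthetic signal, and the normalization reference are assumptions; the paper does not report these details.

```python
import numpy as np

def aemg(emg, fs, t_start, duration=30.0):
    """Average EMG (AEMG): mean of the rectified signal over a window.

    emg      : 1-D array of raw EMG samples (one channel)
    fs       : sampling rate in Hz (assumed; not reported in the paper)
    t_start  : onset of the stretching sensation, in seconds
    duration : analysis window length in seconds (30 s in the study)
    """
    i0 = int(t_start * fs)
    i1 = i0 + int(duration * fs)
    return float(np.mean(np.abs(emg[i0:i1])))  # rectify, then average

def standardize(value, reference):
    # Normalize an AEMG value against a reference value (assumption:
    # the paper does not say which reference was used).
    return value / reference

# Synthetic one-channel signal: 60 s at an assumed 1000 Hz sampling rate
fs = 1000
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, size=60 * fs)
print(round(aemg(signal, fs, t_start=10.0), 4))
```

In practice the reference would typically be the AEMG of a maximal voluntary contraction trial, so that values are comparable across muscles and subjects.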
RESULTS
The comparison of general data between the experimental group and the control group showed no statistically significant difference (P>0.05). Thigh circumferences on both sides of each subject were comparable, with no statistically significant difference (P>0.05), as shown in Table-I.
The stretching angle in the experimental group, using the sports rehabilitation therapy bed, was lower than that of the conventional bare-handed stretching in the control group (T<0, P=0.05), with a statistically significant difference.
In the experimental group, the quadriceps femoris, biceps femoris and tibialis anterior were stretched on the sports rehabilitation therapy bed, and their stretching values were lower than those of the control group (T<0, P<0.01), a statistically significant difference.
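The group comparisons reported here follow the analysis plan described in the methods: a paired t test within groups, an independent-sample t test between groups, and a χ² test for rates. A SciPy sketch of that plan, using placeholder numbers rather than the study's measurements, could look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder pre/post measurements (degrees) for 20 limbs in one group
pre = rng.normal(60, 5, size=20)
post = pre - rng.normal(8, 2, size=20)  # simulated improvement

# Paired t test: intra-group comparison, before vs. after intervention
t_paired, p_paired = stats.ttest_rel(pre, post)

# Independent-sample t test: experimental vs. control group
control = rng.normal(55, 5, size=20)
t_ind, p_ind = stats.ttest_ind(post, control)

# Chi-square test for comparing rates (%): a 2x2 contingency table of
# responders vs. non-responders per group (counts are placeholders)
table = np.array([[18, 2],
                  [13, 7]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(p_paired < 0.05, dof)
```

Note that `chi2_contingency` applies Yates' continuity correction by default for 2x2 tables, which is the conservative choice for small samples like these.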
The active range of motion before and after stretching in the experimental group was lower than that in the control group.

New Type of Sports Rehabilitation Therapy Bed
DISCUSSION
These results suggest that, in sports intervention and rehabilitation therapy, by restraining the subjects' posture and limiting joint movement through the fixed movable foam axis, the sports rehabilitation therapy bed minimizes the participation of interfering factors such as antagonistic and synergistic muscles, making it more accurate than the traditional freehand stretching technique. It also has the advantage of stretching the target muscle at a relatively low angle.
Muscle stretching is a physical therapy, a kind of sports therapy that improves the effectiveness of rehabilitation mainly by exercising the most vulnerable muscle groups. Stretching therapy can effectively promote local blood circulation in muscles, accelerate the dissipation and absorption of inflammatory substances and metabolites, and repair damaged soft tissues. 13-15 In current clinical rehabilitation, muscle stretching therapy mainly relies on the strength of the therapist to help the subjects fix or move their limbs. What's more, both the stretching angle and the stretching sensation are described through the subjective proprioception of the subjects, which not only fails to guarantee the standardization of posture and the accuracy of movements, but also increases the workload of the therapist. Moreover, if the stretching therapy relies entirely on the therapist's experience, the risk of injury or re-injury to tissues such as bones and joints is increased because of individual differences. The results of this study show that for the subjects in the experimental group, treated with the new self-developed sports rehabilitation therapy bed, the efficacy after treatment was significantly higher than that of the control group receiving conventional freehand stretching.
Table-I: Comparative analysis of general data of the two groups (x̄ ± s).
Table-III: Comparative analysis of muscle stretching values between the two groups.
Table-IV: Comparative analysis of active range of motion before and after stretching between the two groups.
Table-V: Comparative analysis of the response rate of stretching therapy between the two groups.
|
TEACHERS’ PERCEPTION IN USING CALL AND TEACHERS’ HABIT IN TEACHING ENGLISH OF SECONDARY SCHOOL IN JAKARTA
This study intended to determine the correlation between teachers’ perception in using CALL and their habit in teaching at secondary schools in Jakarta. The independent variable (X) is the teachers’ perception in using CALL; the dependent variable (Y) is the teachers’ habit in teaching. The research used a quantitative method, with a questionnaire as the instrument to collect the data. The population in this research was 100 teachers, but the writers took only 80 teachers as the sample, selected by random sampling. To obtain the data for the X variable (teachers’ perception in using CALL) and the Y variable (teachers’ habit in teaching), the writers distributed a questionnaire consisting of 40 items. The writers used Pearson Product Moment’s formula to calculate the correlation between the X and Y variables. After analyzing the data, it was found that both sample data sets (X and Y) were normally distributed because χ²o < χ²t for the X data (3.91 < 9.49) and the Y data (8.13 < 9.49). The correlation was found to be r-observed = 0.333 and r-table = 0.217 at a significance level of 0.05 with n = 80. Because r-observed > r-table, Ho was rejected and H1 was accepted. It means that there was a significant relationship between teachers’ habit in teaching English with technology and their competency in using CALL.
INTRODUCTION
One of the advantages of learning English is that people can follow the development of technology. Mastering English is crucial for people who do not want to be left behind by developments in science, trade, and technology, including the internet, since technology has a great impact on people's lives (Nila, 2013). It is beneficial for students and teachers to learn English, as it lets them learn many things, including by following current technological developments. One of the technologies most used by people is the computer. The computer helps people in many ways, especially in education. The education system has changed dramatically since people started using computers, whose use has become widespread and growing in schools and homes. The computer brings innovation to the learning and teaching process, and this innovation helps teachers deliver their material. In the past, the teacher only used the textbook as a medium for teaching, but now they can utilize the computer to teach English skills. Additionally, technology is a way of increasing interaction between teachers and students in the classroom, and it is used by teachers to find out the learning styles that students might like (Schroeder, 1993; Cahyani & Cahyono, 2010).
Moreover, Computer Assisted Language Learning (CALL) is one of the technologies that language teachers can use in teaching, and it is established for better language learning. It can have a major impact on teaching and learning a language. Al-Jarf (2005) defines CALL as an approach to language teaching and learning in which computer technology is used as an aid to the presentation and evaluation of the material to be learned. The computer also helps teachers give presentations in and out of the classroom, and makes teaching more practical because teachers do not have to use a lot of paper in delivering the material. Many studies explain the effectiveness of using technology in teaching and how technology helps develop students' learning achievement (Frigaard, 2002; Schofield & Davidson, 2003; Miner, 2004; Timucin, 2006). Moreover, CALL meets students' needs by providing many resources for practice, and it acts as a facilitator that helps learners communicate with each other at a distance (Talebinezhad & Abarghoui, 2013; Derakhshan, 2018). Al-Mansour and Al-Shorman (2011) analyzed CALL in writing skills and found that students were able to improve their paragraph writing by checking grammar through CALL. Using CALL in teaching English is more beneficial than other approaches, and it has a good impact on students' motivation and ability to speak (Lewis & Reinders, as cited in Reinders and White, 2010). Moreover, CALL can be fun for students, like playing games, because playing a game in the learning process is one way for learners to learn a language with authentic material (Reinders and White, 2010). It is also very beneficial because learning can be made more alive by using a computer; for example, teachers can show students what a real conversation between native speakers sounds like through video or audio.
Therefore, teachers' ability in using CALL is very important because technology is developing rapidly.
However, the school, patterns, structure, and curriculum content do not determine students' learning processes and outcomes; rather, the competencies of the teachers who teach and guide students, especially in using technology, determine the learning outcomes (Hamalik, 2009). Competent teachers will be able to create an effective learning environment and manage their classes so that students achieve optimal learning outcomes. Skill and knowledge are not enough to teach well; the teacher also needs pedagogical competence (Hotaman, 2010). Pedagogical competence is a specific competence that distinguishes teachers from other professions, demonstrating the teacher's ability to organize learning material so that it can be easily understood by learners (Jahiriansyah & Retnowati, 2013; Rosnita, 2011). Teacher pedagogical competency is measured through six indicators, one of which is the ability of teachers to utilize learning technology, that is, to use it as a learning support tool so that learning becomes more effective and not boring (Mulyasa, 2008). A competent teacher should know what he or she has to do in the teaching process, such as selecting teaching materials, explaining the English subject properly, using technology, planning and managing learner activities, monitoring the learning process in class, guiding student discussions, interacting with students, and arranging evaluations for the students.

Vol.2, No.1. April 2021, 14-20 DOI: 10.22236/ellter.v2i1.5378

In fact, although a teacher may know how to be a competent teacher, in daily teaching some teachers cannot show their competency, especially in following the development of technology. The writers believe that teachers' habits can influence their competency in teaching.
Habit is a person's activity that is done repeatedly and unconsciously; it is automatic routine behavior that is repeated regularly without thinking (Butler, 1995). It can be said that habits are activities that are carried out continuously and occur on their own without being controlled. Habits are activity patterns that have been regularized and are carried out continuously (Roecklein, 2006); in other words, habits are activities carried out naturally by ourselves. Previous studies have explained the effect of teachers' habits on their pedagogical competence (Emiliasari, 2018; Andini & Supardi, 2018). Senior teachers, who usually develop the curriculum, make lesson plans, and understand students' characteristics, are better at managing the classroom than junior teachers. Meanwhile, junior teachers, who commonly use technology in their lives, find it easier than senior teachers to follow and implement technological progress in the classroom. This is because of the habits teachers draw on to fulfil their teaching competency. In sum, the habits of students, who mostly utilize technology in their generation, must be matched by their teachers by maximizing the use of technology in class. Like their students, teachers must utilize technology in their lives so that it becomes habitual in the classroom.
METHOD
The writers used quantitative research for this study. Quantitative research is the process of collecting, analyzing, interpreting, and writing the results of a study (Creswell, 2002). The design of the study was a correlational design, intended to analyze data pairs of two variables. Correlation analysis is a term used to indicate the correlation or relationship between two (or more) quantitative variables (Gogtay & Thatte, 2017). The variables in this study are teachers' perception in using CALL and pedagogical competence. The population of this research was teachers from secondary schools in Jakarta, totaling about 100 English teachers from junior and senior high schools. The writers took 80 teachers as the sample because only 80 teachers responded.
The instrument of the research was a questionnaire, used to obtain data on the teachers' perception of using CALL (adopted from Abdulwahed et al., 2010) and the teachers' habit in teaching (adopted from Nurfadillah, 2015). The questionnaire measures several indicators of both perception of using CALL and teaching habit, such as using CALL in teaching language, teachers' ability in using CALL, barriers to using the computer, the ability to manage learning, understanding of the learners, design of learning, utilization of educational technology, evaluation of learning outcomes, and development of learners. The questionnaire consisted of 40 items: 20 items measuring teachers' perception of using CALL and 20 items measuring teachers' habit in teaching English. In analysing the data, the writers first checked the normality of the data distribution using the Kolmogorov-Smirnov test, and then tested the hypothesis on the final data using the Pearson Product Moment correlation.
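The normality check described above — a Kolmogorov-Smirnov test on each variable's scores — can be sketched as follows; the score arrays are synthetic stand-ins for the questionnaire totals, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic stand-ins for the questionnaire totals (n = 80 teachers)
x_scores = rng.normal(75, 8, size=80)  # perception of using CALL (X)
y_scores = rng.normal(72, 7, size=80)  # habit in teaching (Y)

def ks_normality(sample):
    """One-sample Kolmogorov-Smirnov test against a normal distribution
    with the sample's own mean and SD. (Strictly, estimating the
    parameters from the data calls for the Lilliefors correction, but
    this mirrors the common SPSS workflow.)"""
    mu = np.mean(sample)
    sigma = np.std(sample, ddof=1)
    return stats.kstest(sample, 'norm', args=(mu, sigma))

for name, sample in (("X", x_scores), ("Y", y_scores)):
    result = ks_normality(sample)
    print(f"{name}: KS statistic = {result.statistic:.3f}, "
          f"p = {result.pvalue:.3f}")
```

A p-value above 0.05 here would, as in the study, be read as no significant departure from normality.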
FINDINGS
The writers collected the empirical data of two variables: teachers' perception in using CALL (X variable) and teachers' habit in teaching (Y variable). Before analyzing the correlation between the two variables, the writers performed the prerequisite test, namely the normality test, using the Kolmogorov-Smirnov test to check the normality of the data distribution. The result showed an Asymp. Sig. (2-tailed) of 0.073, which is bigger than 0.05. It can be concluded that 0.073 > 0.05, meaning the data were normally distributed. Therefore, the next test could be conducted to get the final score of the correlation between the variables. To test the hypothesis about teachers' perception in using CALL and teachers' habit in teaching, the writers used the Pearson Product Moment correlation. Based on the calculation, the Pearson Product Moment correlation was 0.333: r-observed was 0.333 and r-table was 0.217 at the chosen significance level. Since r-observed > r-table, Ho is rejected and Hi is accepted. To confirm the significance of the relationship between teachers' perception in using CALL (X variable) and teachers' habit in teaching (Y variable), the writers performed a t-test analysis.
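The Pearson Product Moment calculation and the follow-up t test referred to above can be sketched directly from their standard raw-score formulas; the example data are placeholders, not the study's questionnaire scores:

```python
import math

def pearson_r(x, y):
    """Pearson Product Moment correlation, raw-score form:
    r = (N*sum(XY) - sum(X)*sum(Y))
        / sqrt((N*sum(X^2) - sum(X)^2) * (N*sum(Y^2) - sum(Y)^2))"""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sx2 = sum(a * a for a in x)
    sy2 = sum(b * b for b in y)
    return (n * sxy - sx * sy) / math.sqrt(
        (n * sx2 - sx ** 2) * (n * sy2 - sy ** 2))

def t_for_r(r, n):
    """Standard t statistic for testing whether r differs from zero:
    t = r * sqrt(n - 2) / sqrt(1 - r^2)"""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Perfectly correlated placeholder data gives r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0
```

The resulting r would then be compared to the critical r-table value for the sample size, or the t statistic to the critical t value, as the study does.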
Therefore, the t value was calculated to be 2.21, which means that there is a significant correlation between teachers' perception of using CALL and their habit.
DISCUSSIONS
The objective of this study was to obtain empirical evidence of whether or not there is a significant relationship between teachers' perception in using CALL and teachers' habit in teaching English in secondary schools in Jakarta. According to the data description, r-observed is 0.333 and r-table is 0.217 at the significance level of P = 0.05; because r-observed > r-table, Ho is rejected and Hi is accepted. In short, there was a significant relationship between teachers' perception in using CALL and teachers' habit in teaching English in secondary schools in Jakarta.
Habit can influence someone's future. Habits are automatic routine behaviours that are repeated regularly and continuously (Butler, 1995; Roecklein, 2006). If someone is accustomed to doing good activities as habits, they can build good competencies for their future. This is in line with the result of this study: teachers whose habit is to keep developing their competency as teachers also have appropriate ability in using technology, especially CALL. Teachers' habits in teaching are very influential on the success of learning in the classroom. Teachers always have to develop their teaching skills because their students need them as a source of learning. Students hope their teachers know things, especially about the technology in use right now. Even a senior teacher who has been teaching for years has to teach the new generation by following the development of the students' era. Good habits will have a good impact on students' development and learning results in the classroom.
Moreover, the computer is a technology that teachers can use while teaching to make learning easier and more effective. Teachers cannot teach students with conventional media alone; they have to be more sophisticated than their students. Using a computer in the classroom is one tool that helps teachers deliver material to the students. Nowadays, the computer is a useful technology for work and study, and it can be utilized by everybody, especially teachers and students. CALL is a good development in media for assisting the teaching and learning of language. Andini & Supardi (2018) also proved that good competency in using CALL can influence EFL learners in learning language. Park & Son (2014) add that some teachers in Korea had positive and favourable attitudes toward the use of computers, because they realize that technology grows so fast that they, as teachers, have to become accustomed to technology, making it a habit in life, to avoid being clueless about using technology in the classroom.
CONCLUSIONS
In conclusion, the findings and the discussion show that teachers' habit in teaching English with technology has a relationship with their competency in using CALL. Being friends with technology can bring good results in using modern media to teach English. Teachers should simply make using technology a habit in their daily activities, so that their curiosity toward technology is enriched, especially in teaching their students, which is their obligation. Teachers know that delivering good knowledge is part of the responsibility and credibility of their career. Therefore, they should improve their competency to become competent teachers, because habitual use of CALL can affect their ability and understanding of educational technology. In other words, the more often technology is used, the more proficiently teachers use it.
|
Plant medicine usage of people living with type 2 diabetes mellitus in Belize: A qualitative exploratory study
Background Type 2 Diabetes Mellitus (T2DM) is a primary cause of death in Belize, a low-income country with the highest rates in Central and South America. As many people in Belize cannot consistently access biomedical treatment, a reality that was exacerbated by the COVID-19 pandemic, plant medicine usage is estimated to have increased in recent years. This exploratory study seeks to understand which plants are being used, patterns of usage, and the state of patient-provider communication around this phenomenon. Methods Implementing a Constructivist Grounded Theory qualitative design, the research team conducted 35 semi-structured interviews with adults living with T2DM, 25 informant discussions, and participant observation with field notes between February 2020 and September 2021. Data analysis followed systematized thematic coding procedures using Dedoose analytic software and iterative verification processes. Results The findings revealed that 85.7% of participants used plants in their T2DM self-management. There were three main usage patterns, namely, exclusive plant use (31.4%), complementary plant use (42.9%), and minimal plant use (11.4%), related to factors impacting pharmaceutical usage. Almost none of participants discussed their plant medicine usage with their health care providers. Conclusions Plant species are outlined, as are patients’ reasons for not disclosing usage to providers. There are implications for the advancement of understanding ethnobotanical medicine use for T2DM self-management and treatment in Belize and beyond.
Introduction
Belize Diabetes Association and requests by readers for access can be made to them. They are available through email to the Belize City office: bdabelize@hotmail.com and further supporting information can be requested from the corresponding author: Lindsay.Allen@umanitoba. ca. Please note that given the qualitative grounded theory methodology, and given the research team members positionality, accessing the original data set does not guarantee that the analysis can be replicated; replicability is derived from a very different framework and is much more applicable to statistical analysis. Qualitative research has numerous rigour checks which we could provide justification and references for, as needed. Further, the REB first approved the larger program of research in 2019, and it would be difficult to now apply for an amendment to both the original study, as well as its extensions and delegated additions, to change the REB to include the storing the data in an online repository. The original study period was closed last year, as per the REB standard after one additional year extension period.
results in T2DM studies specific to ginger, cinnamon, and moringa, though there are many under-researched plant medicines with significant potential [22,23].
The purpose of this exploratory study is to describe current practices regarding: 1) the plants people in Belize are using as medicines to manage T2DM; 2) the patterns of plant medicines usage; and 3) the state of patient-provider communication around plant medicine usage. This research builds knowledge on the experiences and needs of people living with T2DM as a step toward grappling with the rising prevalence of T2DM and its life-threatening complications. It informs a policy direction toward culturally safe care in Belize and similar settings, while also providing a rationale for increased dialogue between otherwise siloed (e.g., Western, Indigenous, formal, informal) systems and practice of medicine and health care.
Inclusivity in global research
This study was spearheaded by the Belize Diabetes Association (BDA), a non-profit organization that provides subsidized glucometers and other supports to Belizeans living with diabetes. The World Diabetes Foundation and the University of Manitoba (Canada) funded the research. Local health system designers, analysts, and administrators were collaborative and consultive partners, including employees from the Belize Ministry of Health (MoH) and the National Health Insurance (NHI) offices. The local chapter of the Pan American Health Organization (PAHO) collaborated in the development of the project. The Steering Committee included 14 people from the BDA (3), the MoH (3), PAHO (1), local health care providers (5), and administrators (2). A Belizean research coordinator, two researchers from the University of Manitoba, and three local interviewers − one of whom also specialized in data management − formed the research team. Research relationships between Belizean and Canadian team members have been ongoing since 2016.
Local Indigenous and non-Indigenous people leading the University of Belize, the National Institute for Culture and History, BDA offices, and several satellite health care services consulted in the project. Capacity-building was built into the project to help enhance skills for local and Indigenous interviewers who played an important role in data gathering. Three of the authors are local collaborators who reside in the country where the research was conducted, and they are members of the affected communities. The local Steering Committee designed the aims, questions, and priorities of the research through numerous meetings arranged by the local research coordinator. The methodology was arrived at through discussions in this series of consultations under the leadership of the local Steering Committee.
Ethics
All research was approved institutionally by the University of Manitoba Human Research Ethics Board (HS23313 (H2019:406)) and (HS23931 (H2020:229)). Further, the oversight of the Belizean Steering Committee required and ensured that local ethical and cultural protocols appropriate to the setting and context led the process, serving as an interim ethics committee while the country is in the process of developing − but does not yet have − a formal organizational body for these purposes. The informed consent process included an initial conversation between local liaisons and potential participants who were given research project and recruitment information. After they indicated they were interested, on a different day or later the same day, the participants presented at a designated community-use office space. Prior to commencing the interview, the interviewer and interviewee had an informed choice discussion, covering everything on the consent form verbally, with the interviewer using active listening skills and checking it made sense to the participant, with time and space allotted for questions. All participants subsequently signed written consent forms prior to interviews. All participant information was de-identified to ensure privacy and confidentiality. Additional information regarding the ethical, cultural, and scientific considerations specific to inclusivity in global research is included in the S1 Checklist.
Design
To address the three study objectives, we used a qualitative Constructivist Grounded Theory methodology [24,25]. Grounded Theory methodology was deemed appropriate because it develops theory from within the parameters of the local context, iteratively comparing themes that emerge from thick data, allowing for inter-participant nuance and intra-participant complexity while evolving a clear understanding of main phenomena at play in this environment [24,26,27]. Qualitative semi-structured interviews were conducted with individual participants, and meetings were conducted with key informants during site visits.
Participants
Eligibility criteria required that study participants be adults of 18 years or older; diagnosed with T2DM; living in Belize; willing to participate; and able to converse in English (the national language), or through Mayan or Spanish interpreters (provided). Eligibility criteria for the key informants required that they be employed in health provision, health education, health administration, intangible cultural heritage, ethnobotany, plant medicine practice, or Indigenous or other cultural organizations.
Data collection
Snowball and purposive sampling were used to recruit 35 participants through the mobilization of the Steering Committee's networks, to the point of data saturation and beyond [28]. Another 25 informants engaged in informal conversations at site visits throughout Belize, as per Kovach (2021) [29]. Recruitment for informants occurred through the research coordinator's outreach via a nation-wide campaign of phone calls, emails, and in-person visits to relevant organizations. Site visits included meetings at the offices of the National Institute of Cultural Heritage, the Belize Ministry of Health, the National Health Insurance office, the Belize Diabetes Association (Punta Gorda, Dangriga, Belize City locations), the Punta Gorda Polyclinic, the San Antonio Clinic, health administration offices (Punta Gorda, Dangriga, Independence, Belmopan), the University of Belize, and in community. These meetings were not audio-recorded; for these encounters, methods included participant observation with field notes [30].
As this study was part of a larger program of research, the interview questions were originally based on the Diabetes Quality of Life Questionnaire, which was pretested for cultural saliency then modified, as per participant feedback on the measurement tool, to include additional questions about plant medicine use, spiritual and religious practices, and experiences of COVID-19. One of the benefits of semi-structured interviews is that the participants can lead the data generation in new and surprising directions to provide insights not anticipated by the researchers; this is a sign of quality in qualitative knowledge production [24,30,31]. The whole question area around plant medicines was driven and developed by participants' responses and feedback; further, Grounded Theory explicitly sets out to construct theory directly from participants' lived experiences. Question areas covered in the interviews included social locations and influences, experiences and perceptions, root causes, priorities and routines, challenges, spiritual and mental aspects, formal and informal care and services, programming and education, vision of the future, and space to discuss anything else as desired.
Audio-recorded, semi-structured interviews took place between February 2020 and September 2021, each lasting between 30-90 minutes. These were transcribed verbatim, incorporating memo-writing to connect data from field notes [24,32].
Data analysis
Literal codes, focused codes, and analytic categories were developed by LPA, LE, and ARH in a systematized order using Dedoose analytic software [24,32]. Literal codes capture direct quotes of what is said in their immediate literal sense. Focused codes screen out material not relevant to the topic, in this case anything not specific to plant use. Analytic categories capture the emerging themes and subthemes. In accordance with Charmaz's (2014) Constructivist Grounded Theory framework, this involved examining each line, each section, each interview, and all data from all sources together to understand the main themes and subthemes, as well as underlying meanings, patterns, processes, and assumptions, while keeping the human story central to the analysis. We reached a consensus on the coding tree through research team discussions. The rigor of the analysis was ensured through holding emic-etic discussions with the Steering Committee, practicing continuous reflexivity, questioning the emerging theory, and seeking divergent data [24,29,32].
Results
The interview participants and informants provided rich data regarding their T2DM health practices. Two main themes emerged: the widespread pervasiveness of plant medicine usage, and a strong tendency toward non-disclosure of that usage in patient-provider communication.
Participants were from five of the six districts of Belize, all but the least populated district of Orange Walk, owing to limitations on travel imposed during the COVID-19 pandemic. The mean age of interviewees was 54 years, with a range of 34 to 89 years. Demographic characteristics of the sample are presented in Table 1. Of the 35 interviewees, 30 (85.7%) reported using plant medicines. There were three main usage patterns: exclusive plant use (31.4% of the total sample), complementary plant use (42.9%), and minimal plant use (11.4%). All but one of those participants avoided disclosing usage to their health care providers.
"There are so many that grow here": Pervasive usage of local plant medicines
Participants were asked to list the plants they used for their diabetes self-management. The plants that were reported are included in Table 2. Numerous plants were understood to help with directly lowering levels of blood glucose and/or alleviating T2DM symptoms, such as numbness in the extremities, sluggish circulation, skin irritations, sleep disturbances, and fatigue. Plant medicine usage was organized into three categories: those who exclusively used plants, those who complemented plants with occasional pharmaceuticals, and those who used pharmaceuticals primarily but complemented them with plants. Just as there was a spectrum of degrees of plant medicine usage, there existed a spectrum of practitioners, from backyard garden hobbyists to certified herbal doctors who held esteemed lectureship positions for academic audiences.
3.1.1. Exclusive plant use. The first pattern of usage that emerged from the data analysis was the category of 11 participants who exclusively used plants for T2DM. Creole Woman 3, for example, stated that she refused to use pharmaceuticals. She could not specify which combination of plants she was using, having relied on her herbal doctor, a retired medical doctor, to know. Despite not knowing the details, she was certain that the plants helped her manage her T2DM symptoms.
Garifuna Man 2 explained that he had tried various T2DM pharmaceuticals but had suffered from a long process of incorrect discernment of suitable dosages, and he grew wary of the expense and the side effects, as he found the pharmaceutical pathway had worsened his quality of life. He stopped using pharmaceuticals altogether and made his own eyedrops and pain medications from local plants. He shared: "That start to make my eyes heal. . . All the medication I was getting, it wasn't working. . . I was using, like three medications. . . for the pain. . . Herbs, for that purpose, they are very, very effective." Participants who fell in this category still interacted with their physicians, but they assumed more autonomy over their T2DM management, using plant medicines they found accessible. Economic barriers, mistrust of pharmaceuticals, dislike of side effects, and perceived lack of medication efficacy (perhaps compounded by irregular access) were all factors.
3.1.2. Complementary plant use.
The second category comprised 15 participants who preferred plants but complemented them with pharmaceuticals, such as Garifuna Woman 2. She reported that she used pharmaceuticals when they were available in her local clinic, but supply was inconsistent, so she accumulated plant medicines from the garden and trees in her yard, as well as from her neighborhood and social networks. The plants (raw matter or prepared) cost her less money and were more reliably accessible than pharmaceuticals. Garifuna Woman 2 pointed to plants nearby, imparting: "I have fever grass tea that I buy, and sage. . . I use moringa. . . It's a flower, white. That's it right there. And the soursop too. . .Whenever somebody else come tell me about the herbs, I'll buy it, and I drink it, and it helps." She was not concerned with the lack of regulatory bodies or standardized dosages; she placed her faith in plant medicines categorically, trusting in informal knowledge-sharing, and not necessarily distinguishing between different plants. Trust in the source of plant medicines (e.g., as 'of-the-land,' 'natural,' 'God-given') played a significant role in participants' relationships between self and medicine. When they could interact directly with plants (e.g., locating, gathering, preparing), they felt more empowered than when they tried to access pharmaceuticals only to be confronted by multiple barriers (e.g., supply-chain, economic).
Garifuna Man 1 had been diagnosed only days before his interview. He stated that he wanted to use insulin for as short a time as possible until he adjusted; he hoped this would only take a matter of weeks. He shared: "I am on insulin, but I'm trying to wean myself off of that. I'll be trying neem." He had already contacted a well-known Belizean herbal doctor of his own cultural heritage (Garifuna) to control his blood glucose because he considered plants to be safe, accessible, and effective. He explained that he did not want to depend on pharmaceutical companies from foreign countries but rather on medicines found on the land around him.
Creole Woman 5 described her preference for herbal medicines that empowered her by saving her money and enabling her to avoid clinics, to which she expressed a strong aversion. She lived near numerous plants: "I take a lot of natural stuff. I use turmeric, ginger, cinnamon, moringa, things like that. . . I'm using it as a powder, but my neighbor has a lot of moringa trees, so, I will go and pick the seeds because the seeds are good too. And the leaf. I would dry it, as you dry tea. Then you drink that. . . Well, I haven't been to the doctor for diabetes in a long time." Informants shared how during the COVID-19 pandemic, clinics drastically reduced their hours and services. Most participants reported not seeing a health professional for as much as two years, whereas routine T2DM care is typically set to the standard of a check-up once per three months, including an A1C blood glucose test; supplies for these tests were strained even before the pandemic. The void in care prompted deepening reliance on local plant medicines.
3.1.3. Minimal plant use.
The third category of usage consisted of 4 participants who were dedicated to taking prescription medications but still used plant medicines on occasion.
Garifuna Woman 3, for example, had a complex case of T2DM with co-morbidities. She relayed that she depended on her prescriptions but also used plant medicines for blood glucose levels, dengue fever, and high blood pressure without discerning the effects of mixing medicines; little was known about contraindications or interactions with her medications or otherwise. Garifuna Woman 2, a retired nurse with a relatively new diagnosis, said that she was committed to following her doctor's orders, including taking daily medications. She also kept an herb in her garden and regularly used plant medicines growing in the surrounding environment. She described: "You hear about all types of herbal medication that you can take along with your pills. All kind of herbs. . . I drink the noni. . . for diabetes. . .the moringa, . . .the gumbolimbo bark to make tea with it. This tree they tell me about it, and I use it. But I don't know if it helps because I still take my medication. . . I am careful. My thing is, if I don't take my diabetic medication, my organs will damage quick. So, I stick to my pills. . . I stick to it. I take the herbal tea as a complement, but I stick to my medication." Across all three categories, there was near consensus on the perceived safety of using plants as T2DM medicine. One participant, East Indian Man 3, stood out. He reported that though he used herbs, he had concerns regarding the pervasive phenomenon of self-prescribing.
"Everybody has problems with diabetes. A lot of people here have problems with it, and they mentioned that they are drinking herbs, . . .but they don't come and check again. They say that is helping, . . .[but] they still have to come in [to get checked]." He worried about unknown side effects of plant medicines, saying: "That is frightening because there is not anyone to guide you to say only drink this amount of bush medicine. You need someone to give you specifics." Key informants agreed that plant medicine usage was pervasive. They wanted to see more support for botanical knowledge production and dissemination (e.g., to promote initiatives in backyard gardening, medicine and food security and sovereignty, sustainable eco-tourism, higher learning opportunities). The COVID-19 pandemic heightened the desire for improving domestic food and medicine systems, as border and port closures exacerbated pre-existing supply shortages (e.g., T2DM medications, glucometers, glucose test strips, quality proteins, complex carbohydrates). One key informant (Garifuna) kept a bookcase of herbal preparations ready for the many relations who came to her for informal health care-she actively resisted feelings of helplessness that would otherwise subsume her, feeling happy to be able to supply them some relief for their ailments at no cost.
There was another dimension to the belief and trust in plant medicines. East Indian Man 4 said: "The tree is the healing of the nation. I believe there is a bush for every sickness on this earth." Similarly, Mestizo Woman 2 expressed: "God put all trees and plants on this earth because all of them have something for all kinds of sickness. I think natural remedies is the best medicine." Plant medicines were experienced as, believed to be, and valued as empowering and life-affirming, culturally and spiritually congruent, and dependably available even under the toughest of COVID-19 restrictions.
"They don't really listen": Difficulty with patient-provider communication
The second main theme that emerged was that of difficult communication between health care providers and people living with T2DM in Belize. While most participants told interviewers that they used plant medicines, they did not disclose this pertinent health information to their care providers. When asked if they talked to their doctors about using plant medicines, participants across the three categories of usage typically said they did not. Mestizo Woman 3, for example, elaborated on the unspoken norm of 'don't ask, don't tell': "I take moringa. I take it every morning for tea. I feel good when I take it. No, I don't tell the doctor, and he did not ask me if I take it. But when I take it, I don't have any problems." There were participants who shared that they believed their provider would be unreceptive and even angry if they told them, and previous negative encounters led to patient fear of disclosure and omissions in reporting. Though many of the plant medicines have not been studied via double-blind randomized controlled trials, participants were not concerned about this lack of evidence. Unbothered, they were not waiting for external validation, nor did they feel empirical evidence was crucial to their usage and understanding of efficacy.
In some instances, it was clear that communication was already strained by cultural differences and class tensions. Creole Woman 1 described her dilemma with her doctor (of a different culture and ethnicity): "I always want to ask her, but she's so delicate, you have to think what you [are] asking her. . . She doesn't tell me anything." She elaborated on how she felt that she was perceived as ignorant and inferior, to the point that she had stopped asking her doctor questions altogether. Garifuna Man 2, who told a story about getting the run-around for years when trying to access T2DM services and who had ended up partially blind due to botched surgeries and complications, surmised: "Most of the doctors. . .they don't really listen to the patient. . . or explain what exactly is the problem." Seven participants advocated for more active listening (e.g., asking for clarification, reflecting) to ensure better patient-provider communication. Garifuna Man 1 made a point of booking multiple appointments close together to ensure time to ask his questions. In instances when his physician was preoccupied during his appointment with phone calls and paperwork, he waited in the office until the end of the day to ask his questions. Informants from health offices often cited shortages of medical supplies, educational pamphlets, nutritionists and other specialists, and staff in general as part of the larger issue.
Participants wanted more open dialogue about the nuances of managing multiple prescriptions and medicinal botanicals. Garifuna Man 2 described an issue he had when he asked his provider about a new prescription. He had read about potential adverse effects online, feeling concerned when he learned the drug could weaken the immune system. Rather than engage in an informative discussion, his doctor got angry. He said: "That's why I decided not to use it. When I explain to her, she get very mad. . .[so] I just told her I would use it, but I never use it." Mestizo Man 1 tried an anti-diabetic prescription but felt the side effects were extreme with significant weight gain and low blood pressure. He started a conversation about it with his doctor, but there was no follow-up. The physician did not inquire for details, nor explain the pharmacological effects, so he left the experience feeling confusion, frustration, and disillusionment with the medical establishment in general. This convinced him he needed to take matters into his own hands, and he began preparing home remedies from locally available plants, thus impacting the trajectory of his T2DM care.
When asked about his interactions with his doctor, East Indian Man 1 expressed deep gratitude for physicians who have worked hard for years to learn about medicine and to share what they learn with others. He went on to say: "I just would like them to keep on advising us. Especially people with diabetes. How to go on living this life." While a couple of people described negative experiences of being scolded by doctors upon disclosure of plant usage, East Indian Woman 1 represented an exception: she found it helpful for open communication when a doctor told her it was alright to take the herbs she was taking, but that she needed to take the pills too.
Discussion
This study identified some of the plants that people are using for T2DM in Belize, their patterns of usage, and some of the difficulty with patient-provider communication on the subject, indicative of a larger disconnection between biomedicine and ethnomedicine. A 2021 study on T2DM self-management in Eastern Ethiopia pointed out that COVID-19-related medication shortages bolstered patient preference for herbal medicines, similar to our findings; however, the Ethiopia study discounted any validity to plant medicines [33]. While Letta and colleagues (2021) problematize patient preference for herbs over pills and see the solution as a sustainable supply of medications, pandemic circumstances notwithstanding, this study suggests there is also potential in locally accessible plant medicines, especially with more (necessarily culturally safe) research and development. Plant medicines have significance for intangible cultural heritage and for inexpensive community-based care, and pharmaceutical medications are often originally derived from plants, making it unwarranted to discount plant medicines categorically [26,[34][35][36][37].
A 2019 international review found cultural safety to be a prerequisite of health equity for Indigenous people, inclusive of access to traditional plant medicines [38]. While the trend over the past thirty years of mandating cultural competency training for healthcare professionals is incrementally helpful, bolstering cultural safety is far more crucial because it goes further than learning about the patient's culture; it addresses the underlying power imbalances that are otherwise continually perpetuated via societal institutions and within provider-patient communication [38]. Interventions that are grounded in patients' culturally-specific understandings of health positively impact T2DM outcomes [39].
This study's findings suggest that if cultural safety around plant medicine usage were developed in Belize, then patients could benefit in several ways: reducing fear of disclosure; improving patient-provider trust and communication; facilitating health literacy and education; and enhancing quality of care and patient satisfaction. Bringing people together from various positions on the spectrum of formal to informal health care could be beneficial in promoting dialogue, deepening understanding, enabling problem-solving, and propelling innovation, research, and development initiatives with the common goals of improving health and care for Belizeans living with T2DM.
Ethnopharmacological literature has begun to reveal that there are hundreds of medicinal plants growing in the forests, pastures, wetlands, mangroves, and other diverse ecosystems of Central and South America with applications for T2DM, its sequelae, and its symptoms [11,12]. Plant medicines have profound cultural and spiritual significance to local populations, and there are many gaps in the literature regarding location- and culture-specific variability of application and knowledge [11,12]. Studies in Trinidad and Tobago found overlap in plant medicine usage for T2DM, namely, regarding the aloe, coco, and papaya plants [36,40]. In a synthesis of 25 meta-analyses of plants that are used around the world and that have undergone controlled experiments for T2DM medicinal efficacy, those with the largest effects on HbA1c blood glucose tests were aloe vera leaf gel, psyllium fiber, and fenugreek seeds [41]. Numerous plant medicines were found to reduce fasting plasma glucose tests [41]. While no serious negative effects have been found, many plants remain unstudied, in terms of efficacy or otherwise [41]. Three of the plants we found to be used for T2DM in Belize, namely, aloe, cinnamon, and ginger, have been studied, with aloe showing the most consistently promising results [41]. Our findings contribute to the list of T2DM-relevant plant medicines worthy of further inquiry, as well as an exploration of surrounding issues of communication, trust, and disclosure.
An HIV/AIDS case study in Belize found that Mayans often felt torn between the traditional Indigenous healing system (inclusive of plant medicine) and biomedicine [13]. While much traditional knowledge has been lost through processes of colonial suppression and the dominance of biomedicine, the two systems of medicine can engage areas of mutuality or "windows of compatibility," building on work by Dickinson (2008) [42]. Our study reaffirmed that there are two medicine systems, that people feel torn between them, and that this tension represents an unnamed barrier to health and health care. As this tension requires patients to maintain the unspoken segregation of systems, it represents yet another burden. People must choose between a culturally safe informal system and a medically established, more resourced system, or else they must hide their participation in one from the practitioners of the other. This phenomenon extended beyond the Maya to diverse Belizeans. Waldram and Hatala (2015) found that while traditional Indigenous healers welcomed dialogue with biomedicine practitioners, this had never been clearly reciprocated; such reciprocation is what is needed for bridge-building moving forward. Given that practitioners of both systems are working with overlapping patients, it follows that they would have common motivation to improve the state of the relationship and communication between systems. Case studies on intercultural health initiatives in Guatemala, Chile, Colombia, Ecuador, and Suriname defined the required shared principles of mutual respect (e.g., between individual practitioners, systems of medicine) and openness (e.g., to being in relationship, adapting to new learnings) [43].
In integrative scenarios in international settings, bridge building happens when health care providers have trained in cultural safety while traditional healers have coordinated associations to communicate their aims, needs, and standards to the formal health care system [13,39,44]. Cross-cultural medical collaboration could prove an important direction not only for treating T2DM, but also for co-morbidities, including mental health conditions such as anxiety, depression, substance misuse, and post-traumatic stress disorder [45,46]. These types of innovations require resources to develop, implicate surrounding legalities, and necessitate thoughtful selection of practice models, role clarity, and appropriate adjoining agreements [43]. In various international contexts, Indigenous-led intercultural health services have demonstrated benefits including improved uptake of services and programs, faster urgent response in remote settings, and decreased all-cause mortality [39,44]. Similar interventions have been shown to improve multiple indicators, such as access to prenatal care, remote maternal and infant birth outcomes, childhood vaccination rates, patient trust in providers, patient satisfaction, community and cultural pride, and addiction treatment retention and drug urine tests [44]. Reduced malnutrition, fetal alcohol syndrome incidence, HIV mortality rates, emergency department use, and ER staff turnover have also resulted from such innovations [44]. Key informants expressed enthusiasm for this direction in Belize, stating that inaccessibility of imported medications and unmet medical needs have become more pressing than ever since COVID-19 and thus merit more inquiry in and of themselves.
Belize has preserved many of its forests, flora, and intangible cultural heritage, and thus fosters the protection of plant medicines [11,47]. Eco-tourism has been an important industry in Belize, and though it suffered under the travel restrictions of the COVID-19 pandemic, many local people were interested in revitalizing and expanding economic, educational, and medicinal opportunities in this industry. While 25% of Belizean citizens still do not have access to public health care [2,9], traditional Indigenous healers and other plant medicine practitioners are an important component of the health care context. People continue to turn to those knowledgeable in plant medicines. A large majority of the participants in this study used plant medicines in their diabetes self-management routines.
As women in Belize are more likely than men to experience poverty, stress, and T2DM, while also carrying a disproportionately heavier burden of unpaid caregiving work [8], more research is needed to understand how gender and chronicity interact with plant medicine usage. A limitation of the study was the lack of intersectional analysis regarding how poverty and gender interact with plant usage and health care access. The purposive and snowball sampling did not ensure a statistically representative sample.
More research is needed to understand how specific plant medicines are being used, their efficacies and applications, how these affect T2DM outcomes, relevant guidelines for health care providers, how stakeholders can collaborate in health-promotion efforts, and best practices for community engagement in ethnoecology research in Belize. In the big picture of addressing the rising T2DM prevalence, underlying issues of poverty and inequity need to be addressed, as does access to health care for the underserved and unserved populations. Stakeholders need to implement public health campaigns that connect biomedicine and ethnomedicine to address T2DM via a culturally safe approach. This exploratory qualitative study was a preliminary step in beginning to address these knowledge gaps and provide direction for future research and policy priorities.
Conclusions
This study's research question area was driven by people living with T2DM in Belize, through the application of Grounded Theory methodology, which centers and amplifies participants' voices with aims of social justice and health equity [48]. The pervasiveness of plant medicine usage suggests many possible implications for public health and clinical practice, as well as directions for ethnomedicinal diabetes research. These directions would require local leadership and community participation to prevent exploitation, appropriation, and cultural harm [46]. Recommendations include improving patient-provider communication, cultural safety in health services, and enhanced partnerships across informal and formal health systems.
Exploring Mental Health during the Initial COVID-19 Lockdown in Mumbai: Serendipity for Some Women
Background: This study explored how low-income women already distressed by reproductive challenges were affected during the initial lockdown conditions of the COVID-19 pandemic in Mumbai, India. Methods: Women with reproductive challenges and living in established slums participated in a longitudinal mixed-methods study comparing their mental health over time, at pre-COVID-19 and at one and four months into India's COVID-19 lockdown. Results: Participants (n = 98) who presented with elevated mental health symptoms at baseline had significantly reduced symptoms during the initial lockdown. Improvements were associated with income, socioeconomic status, perceived stress, social support, coping strategies, and life satisfaction. Life satisfaction explained 37% of the variance in mental health change, which was qualitatively linked with greater family time (social support) and less worry about necessities, which were subsidized by the government. Conclusions: As the pandemic continues and government support wanes, original mental health issues are likely to resurface and possibly worsen, if unaddressed. Our research points to the health benefits experienced by the poor in India when basic needs are at least partially met with government assistance. Moreover, our findings point to the critical role of social support for women suffering reproductive challenges, who often grieve alone. Future interventions to serve these women should take this into account.
Introduction
The global COVID-19 pandemic has wreaked havoc on human health, and its effects continue to unfold as subsequent lockdowns have disrupted day-to-day life, employment, and economic status, with additional devastating consequences for physical and mental health, to which India is no exception [1]. Cases in India date back to as early as 30 January 2020, have increased rapidly, and continue to affect daily life [2].
In India, the state of Maharashtra has experienced among the highest numbers of COVID-19 cases, with 6.61 million cases reported as of October 2021 (https://www.google.com/search?client=firefox-b-1-d&q=covid-19+maharashtra+tracker, accessed on 27 November 2021). The state also reported 133,000 COVID-related deaths [3] despite strict lockdown measures instituted in mid-March 2020 [4]. These lockdown measures continued [5] for months with constant uncertainty pertaining to severity and duration [6]. Although the national lockdown ended in May 2020, red zones, or high-impact zones, such as the city of Mumbai, continued with subsequent lockdowns or restrictions [7]. Furthermore, a government-approved study comparing COVID-19 infection rates in slum and non-slum communities in Mumbai found markedly higher rates in the slums (54.1% vs. 16.1%, respectively). The density of slum communities (crowded residential areas built of substandard housing and typically lacking infrastructure for clean water, sewage, and waste disposal) renders social distancing nearly impossible, and poor availability of hygiene facilities likely contributes further to this disparity [8].
Like elsewhere in the world, the lockdowns, although designed to prevent the spread of COVID-19, have caused fear of subsequent economic consequences. Some predicted that the lockdowns would trigger India's first recession in 40 years. Devastating consequences of the global pandemic on maternal and child health have also been predicted, due to reductions in essential health services expected to contribute to increased maternal deaths, newborn deaths, and stillbirths [9]. In low- and middle-income countries, reductions in maternal and child health services are estimated to range from 9.8 to 39.3% by setting, and hunger-related consequences (due to decreased food availability) are expected to increase by 10 to 50%, resulting in increased mortality and morbidity [10]. In India, where poorer members of society were already affected by food insecurity, COVID-19 lockdowns and crop devastation by locusts further compound these problems [11,12].
Women of reproductive age are even more vulnerable due to disruption of contraception and other reproductive care services, resulting in unwanted or unintended pregnancies, sexual abuse, and domestic violence, in addition to gender-based economic discrimination (such as lost wages) during lockdown conditions [13][14][15]. Moreover, child marriages in India have increased during the pandemic [16].
Due to the relentless impact of the pandemic, including the lockdowns imposed by India's government until the summer/fall of 2020, experts are increasingly beginning to consider the potential mental health consequences of COVID-19, an issue already discussed as a significant challenge in many affected countries [17][18][19][20][21][22][23]. Complicating this matter further is the dearth of mental health professionals in India, and the longstanding stigmatization of mental health [1,24]. In fact, negative attitudes towards mental health professionals in urban India were found to be most prevalent among young adults with lower education and strong religious beliefs, particularly among Hindus and Muslims [25]. Current reports of suicide and increased psychological distress during the pandemic have prompted recommendations for tele-mental health care, social media promotion of preventive measures, and dissemination of reliable information updates pertaining to the virus [1,14,23]. Most expect that the psychological distress present prior to the pandemic will significantly worsen in light of the increasing fear of contracting the virus, in addition to the multiple stressors related to the lockdown [1,19,21]. As a result of the importance placed on women's childbearing in India's traditionally patriarchal, pronatalist society, the loss of status, in addition to grief, puts women who have reproductive challenges (trouble conceiving and/or perinatal loss) at high risk for mental health sequelae, thus making them an especially vulnerable sub-group [26,27].
As part of a larger study exploring maternal mental health in slum-dwelling women with reproductive challenges in Mumbai, we assessed these women's mental health using validated scales just prior to the COVID-19 pandemic. We built on this assessment to monitor our participants' mental health via follow-up phone interviews during the initial COVID-19 lockdown. Using the conceptual framework of the transactional model of stress and coping [28], we explored how this vulnerable group coped in the initial wave of the COVID-19 epidemic. The framework suggests that when stressors (infertility) impact personal goals (such as reproduction), anxiety and distress are likely. When one has little control over the situation, coping efforts are used to try to manage stress. Emotional regulation is the most adaptive coping strategy, but may be thwarted by pressure to meet social expectations, overwhelming obstacles, and lack of social support [29]. However, meaning-based coping, including positive reappraisal, is possible when events (such as subsequent pregnancy) or revised goals (e.g., personal or career growth) may result in emotional well-being, improved functional status, and health behaviors; in a word, resilience [28,30].
Although we expected COVID-related stress to strongly impact these vulnerable women's lives, we also need to note that the Indian government made the decision to try to balance their strict lockdown with a variety of measures to support those expected to shelter in place [31,32]. In the first three months, these included a government contribution to employee wages for those in the formal sector; collateral-free loans (up to INR 200,000) for self-help groups; food subsidies (5 kg of wheat and rice, 1 kg of legumes per household); provision of cooking gas cylinders for women below the poverty line; stipends specifically for senior citizens, widows, and people with disabilities (INR 1000); support for farmers; relief for daily wage workers; and insurance for medical workers [33][34][35][36][37][38][39]. The initial INR 1.7 trillion (USD 22 billion) three-month relief package was followed by additional relief packages [39], although the exact nature was less clearly documented and declined over time.
The purpose of this study was to longitudinally explore how these low-income women, already distressed by reproductive challenges, were affected by ongoing pandemic conditions up to five months into the COVID-19 lockdown. In line with early studies on COVID-19 demonstrating increased mental health symptomology [23,40] and, specifically, increased risk of maternal depression and anxiety [41], we hypothesized similar negative health effects in our sample of vulnerable women. However, because the government provided aid during the early lockdown months, we also wanted to explore how this aid, while meager, may have provided an unexpected temporary security, and how this may have affected the overall mental health and functioning of our participants, despite the pandemic stress.
Overview
As part of the larger maternal health mixed-method study, we administered baseline surveys as in-person structured interviews (due to limited literacy and research exposure) with 334 slum-dwelling women of reproductive age, to compare women with reproductive challenges to women without such challenges. The ultimate purpose of the original study was to prepare for a wellness intervention for women experiencing high levels of reproductive related distress.
In anticipation of a future intervention, we collected interested women's phone contact information and their consent for us to contact them. Although the COVID-19 pandemic delayed the intervention, in the current sub-study we recruited 98 women from the original group of women with reproductive challenges (stillbirth and perinatal death), who agreed to participate in phone follow-up interviews that focused on life during COVID-19. The first follow-up phone interview was conducted 5 weeks into the lockdown (mid-April 2020), and the second follow-up phone interview occurred approximately five months after the baseline data collection (August 2020, four months into the COVID-19 lockdown). Data were collected by gender- and language-matched, trained local interviewers, teamed up with Accredited Social Health Activist (ASHA) workers who worked with our team to recruit participants. ASHA workers are government-supported, trusted women in their communities, who serve the health needs of low-income communities, similar to Community Health Workers (CHWs) in other countries [42]. All interviews were conducted in either Hindi or Marathi, the predominant local languages.
To best describe the complexity of our participants' lives, strengthen our interpretations, and further our understanding of the phenomenon of the women's challenges, we also collected focus group data from the ASHA workers (n = 7) and asked several open-ended questions during the interviews.
Qualitative Methods
Using purposive sampling, we conducted a qualitative focus group with seven ASHA workers who live and work in the same community (the Turbhe area of Mumbai) as our participants. The focus group was conducted by a gender- and language-matched trained facilitator. Participants ranged in age from 26 to 45 (mean age 36) and were all Hindu women. We used a semi-structured guide with questions aligned to our theoretical model (social expectations for women of reproductive age, childbearing concerns, and challenges/stress in the community), and audio-recorded, transcribed, and analyzed the resulting transcriptions using standard methods, including coding for emergent themes [43]. We also transcribed any answers to the open-ended questions embedded in the follow-up phone interviews and analyzed them using the same methods.
Quantitative Methods
The baseline survey was informed by prior literature and ASHA worker input from the focus groups, and included demographic questions and questions about social support, religious coping, coping style, autonomy, mental health, satisfaction with life, perinatal grief, and post-natal depression. Of the original 334 women, we attempted to contact the 118 women who had indicated that they were interested in future interventions. We were able to contact 98 women (83% response rate) who completed the follow-up phone interviews and are part of the current study. In these follow-up interviews we repeated the mental health questions from the baseline interview and also asked about stress, COVID-19-related issues, and resilience. Ethics Committee approval from the Indian authors' institutions and Institutional Review Board (IRB) approval from the US authors' institution were received for the parent study and current study (interviews and focus group), in accordance with the Declaration of Helsinki, before commencing data collection. Written informed consent was obtained from participants prior to the baseline assessment, and verbal consent was obtained prior to the follow-up phone interviews. Interview participants were different from focus group participants, but were recruited from the same population.
Descriptive Variables
Demographic variables included age, marital status, religion, education, occupation, and socioeconomic status. Additional descriptive variables included general health status and reproductive history. New descriptive variables from the phone interview included Likert-type, multiple answer, and open-ended questions pertaining to COVID-19 in their communities and how it was affecting their lives.
Validated Scales
Hopkins Symptoms Check List-10 (HSCL-10). An Urdu translation of the HSCL-10 has performed well among a sample of poor Pakistanis [44]. A Hindi translation was used with a poor, rural population in India and found to have good reliability, with a Cronbach's alpha of 0.84 [45]. Therefore, it was chosen for use among this population who share some characteristics. The measure consists of 10 items, which are rated on a Likert-type scale ranging from (1) not at all, to (4) extremely, with higher scores representing more symptoms of anxiety and depression. Like Syed et al. [44], we used a mean cut-off score of 1.65 or greater to indicate the presence of notable mental health symptoms (anxiety and depression). The Cronbach's α = 0.87 in this study.
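The scoring rule above can be sketched in a few lines. The following is an illustrative implementation, not the study's actual code, and the example ratings are hypothetical:

```python
# Minimal sketch of HSCL-10 scoring as described above: ten items rated
# 1 (not at all) to 4 (extremely); the scale score is the item mean, and
# a mean of 1.65 or greater flags notable anxiety/depression symptoms.
# Illustrative only -- not the study's actual code.

CUTOFF = 1.65

def score_hscl10(ratings):
    """Return (mean_score, symptomatic_flag) for a list of 10 ratings."""
    if len(ratings) != 10 or any(not 1 <= r <= 4 for r in ratings):
        raise ValueError("expected 10 ratings between 1 and 4")
    mean_score = sum(ratings) / 10
    return mean_score, mean_score >= CUTOFF

# Hypothetical respondent with a few elevated items:
score, flagged = score_hscl10([1, 2, 1, 1, 3, 2, 1, 2, 2, 2])
print(score, flagged)  # 1.7 True
```

A respondent at exactly the cutoff is counted as symptomatic, matching the "1.65 or greater" rule.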
Perceived Stress Scale-4 (PSS-4). The PSS-4 is a self-report measure of participants' global feelings of stress over the last month, and is a shortened version of Cohen's 14-item Perceived Stress Scale [46]. The measure asks participants to rate how often they have experienced stress, with response options of 0 (never) to 4 (very often). Before summing the four items, positively worded items are reversed so that higher scores indicate higher stress. The PSS-4 is a reliable measure of perceived stress (Cronbach's α = 0.77) [47] developed for research with community samples [48]. The scale is available in many languages and has been used in a variety of populations [47,[49][50][51]. In the current sample the Cronbach's α = 0.68.
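As a companion sketch (again illustrative, not the study's code), PSS-4 scoring as described above, assuming the standard PSS-4 item order in which the second and third items are the positively worded, reverse-scored ones:

```python
# PSS-4 scoring sketch: four items rated 0 (never) to 4 (very often);
# the positively worded items (assumed here to be items 2 and 3, as in
# the standard PSS-4) are reverse-scored before summing, so totals run
# 0-16 with higher scores indicating higher perceived stress.

REVERSED = (1, 2)  # zero-based indices of the reverse-scored items

def score_pss4(ratings):
    if len(ratings) != 4 or any(not 0 <= r <= 4 for r in ratings):
        raise ValueError("expected 4 ratings between 0 and 4")
    return sum(4 - r if i in REVERSED else r for i, r in enumerate(ratings))

print(score_pss4([3, 1, 0, 3]))  # 3 + (4-1) + (4-0) + 3 = 13
print(score_pss4([0, 4, 4, 0]))  # minimum stress: 0
```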
Analytic Quantitative Methods
Descriptive analyses were undertaken to compare the women in the parent study sample (n = 236) with the women who agreed to the phone sub-study presented here (n = 98). We then conducted chi-square and t-tests to determine significant differences between these two groups for the variables of interest. Among the 98 women who agreed to participate in the sub-study, 73% (n = 71) participated in a second follow-up phone interview four months into the lockdown (5 months post baseline survey), and we used the chi-square or paired t-test to explore differences over time in COVID-related variables (which we did not have at baseline). One-way repeated-measures ANOVA was used to longitudinally explore our mental health (HSCL) variable over time (at baseline, before COVID-19, and at two follow-ups during COVID-19). We created a mental health change variable by calculating the change between the baseline HSCL and the first phone follow-up HSCL. To optimize power and for model-building purposes, we bivariably explored (using Pearson's correlations) the independent variables that were significantly associated with mental health change (HSCL-10 change). We then included only these variables in the multivariate analyses.
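The change-score construction and bivariate screening step can be illustrated as follows. The data, variable names, and the |r| >= 0.5 screening threshold are all hypothetical; the study screened by statistical significance, not by a fixed correlation cutoff:

```python
# Illustrative sketch of the model-building screen described above:
# compute an HSCL change score (baseline minus first follow-up, so
# positive values mean improvement), then retain candidate predictors
# strongly correlated with it. Data and threshold are hypothetical.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

baseline  = [2.4, 1.9, 2.8, 2.1, 2.6]
follow_up = [1.5, 1.7, 1.6, 1.4, 1.8]
change = [b - f for b, f in zip(baseline, follow_up)]

candidates = {
    "life_satisfaction": [18, 12, 25, 14, 22],
    "years_married":     [8, 7.5, 7.5, 6, 6],  # essentially unrelated here
}
kept = {name: pearson_r(vals, change)
        for name, vals in candidates.items()
        if abs(pearson_r(vals, change)) >= 0.5}
print(kept)  # only life_satisfaction survives the screen
```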
Education and Resources
Women who participated in the phone follow-ups were offered COVID-19-related education and resources, according to their interest and needs. Educational options included guidelines for social distancing and hygiene, in addition to recommendations to maintain or optimize wellbeing during confinement. Additionally, referral resources were available for any women with a high HSCL score or indications of experiencing serious tension or conflict.
Qualitative Findings
Findings relevant to the current study, summarized by emerging themes from the focus group (FG) discussion and the open-ended follow-up questions, helped us understand the significant pressures of familial expectations on our participants in the context of reproductive and work expectations, and how these result in distress. We learned that, although these pressures did not change per se, having the women surrounded by their families during the lockdown (all had to be home, whereas many would usually be out working or seeking work) resulted in unexpected supports that seemed to help the women. This was confirmed by the quantitative results in terms of improved mental health.
The ASHA worker respondents who served as expert consultants about their community of women (n = 7) confirmed that most women in their communities are married by 17-18 years of age, and that they are initially housewives who live with their husbands or in a joint family (husband and his parents and siblings). Financial issues may force a young woman to work outside the home, but if that is the case, she is still expected to maintain her household duties. Participants expressed the social expectation that women begin having children soon after marriage. As stated by one ASHA worker, "She is expected to get pregnant or have a baby compulsorily within a year of marriage", which the rest of the FG unanimously and strongly affirmed. Participants further noted that reproductive expectations do not change even if the woman is working outside the home, which often involves manual labor.
"If women don't get pregnant even after two to three years of waiting, then the husband will marry again for the second time. Both the wives live together with the man." Though this was stated matter-of-factly, the FG participants went on to discuss the social consequences of reproductive challenges such as lower social status, displacement, abuse, divorce, or abandonment. Furthermore, the ASHA workers indicated that the women are blamed for reproductive challenges, whether they are infertility, miscarriage, stillbirth, or infant death. "Mother-in-laws constantly taunt-blame women saying 'you killed the child'..." even if a medical report indicates otherwise. Some families do offer emotional support to the mother immediately after stillbirth, but nevertheless expect the women to get pregnant again right away. In general, the participants said that the women in their community face enormous family pressure to reproduce and ignore family planning advice from medical professionals or ASHA workers. "Some women has five months old child but three months pregnant already because of family pressure". Sometimes this was felt to be because "They are not happy with girl child alone", alluding to son preference. This turned to frank discussion regarding the prevalence of son preference in their communities. As one participant stated "There is family pressure to have baby boy."; and another comparing the loss of female or male babies said "They grieve more for boy. They are more disturbed with still born baby boy compared to baby girl".
The societal expectations for reproduction (and family pressure, which in some cases was noted to include coercion) combined with social consequences (demoted to lower status within the household, blame, etc.) was noted to add to women's distress when experiencing reproductive challenges. The ASHA workers noted distress to include grief, fear, crying, intense distress, withdrawal, isolation, and trauma. Participants discussed the intensity and length of grief as variable, and remedies for reproductive challenges were sought at great financial burden to the family with varying results; however, they concluded that the only true way to resolve the distress was to successfully reproduce. These three themes (reproductive expectations, consequences of reproductive challenges, and distress related to reproductive challenges) are somewhat overlapping and compounding. FG participants identified the mental health sequelae of reproductive challenges as an array of emotional distress and symptomology.
In response to an open-ended question on the phone interviews regarding any changes noted in their relationships during lockdown, women (n = 98) noted feeling better supported by their families with everyone together at home during the lockdown. Participants noted that spending more time together resulted in talking more and caring more for each other, which had a positive effect on their relationships. "Before my husband use to stay at work and used to spend less time with us, but now he spends much more time with us so things are much better." "As we are spending much time together our bond has become more strong." A few participants were or became pregnant during the lockdown and noted that they received extra consideration and caring, making them feel special: "Due to my pregnancy my family members have started loving me more." Another open-ended question on the phone interviews asked the women what else, in addition to meditation, prayer/worship, maintaining a healthy diet, exercise, and using time productively, they were doing to try to stay healthy during lockdown. Many of the women again focused on their families, noting that doing household work, as a way of caring for their families and staying busy, had a positive influence on them; in other words, household work was actually perceived as self-care. Others noted extra time for sleep, sewing, and engaging in leisure activities, "playing indoor games", all things that few would have been able to do before the lockdown, as "hustling" for work or survival funds is an expected part of life for these low-income families.
Participants
Tables 1 and 2 compare our overall group of slum-dwelling female participants with those who agreed to participate in the current sub-study with phone follow-ups. The larger study sample (n = 334) consisted of women 18 to 42 years old, residing in established slums of Mumbai. More than half (56.9%) lived in a joint family context rather than a nuclear family (42.1%), and identified themselves as daughter-in-law or wife (98.3%), having been married an average of 8.22 years (SD 5.82). Most were Hindu (53.4%) women, followed by Muslim (33.9%), Buddhist (9.3%), and other (3.4%), including Christians and Banjaris. Most had low education levels (only 19.5% having higher secondary education or above), worked as unskilled workers or homemakers (89.4%), had a monthly family income of less than INR 20,714 (approximately USD 274), and considered their families lower middle class. In general, they deemed themselves physically and mentally healthy (75.4% and 80.6% indicating no problems, respectively). The current study sample (n = 98) included only women who had indicated the desire to engage in a future self-care intervention based on their own assessment of need, and was compared to the larger study sample, with significant differences noted in terms of occupation, contraceptive use, health, and psychosocial problems. The current sample included more semi-skilled or semi-professional workers; women were more likely to have anemia or other health problems; were more likely to have anxiety, depression, or experience domestic violence; and were more likely to be using some form of contraception. Additionally, the women participating in the phone interviews were significantly more likely to indicate less social support, employ more wishful thinking as a coping style, and have greater autonomy, higher mental distress, and lower satisfaction with life, compared to the larger sample of the parent study (see Table 2).
Table 3 describes changes over time due to the COVID-19 lockdown. At the first follow-up phone contact (5 weeks after the COVID-19 lockdown began), participants noted multiple sources for COVID-19 information and updates, relying most heavily on news media (80.6%), with fairly high trust regarding the information received (M = 3.62, SD 0.92 on a 1-5 scale). The vast majority indicated that there were no cases in their community. At the second phone follow-up (five months after baseline, or four months after the initial COVID-19 lockdown), nearly 60% still did not know of any positive cases in their community, although friends and family members were identified as having COVID-19. Most participants lived in one- to two-room houses in crowded slum neighborhoods. The number of people living in the household (M = 5.64, SD 2.71) indicated tight living quarters, and less than half (42.9%) indicated the ability to maintain social distancing when allowed outside. On a scale of 1-5, most participants were initially quite worried about becoming infected (M = 3.64, SD 1.30), and that had not significantly changed by the time of the second phone interview (M = 3.55, SD 0.88). Lockdown conditions negatively affected monthly family income and food availability, but both improved at the second phone follow-up (p < 0.004 and p < 0.001, respectively). Obtaining necessities was negatively affected but did not significantly change over time. In the first phone interview, most women (74.5%) did not note increased tension/conflict in family relationships.
However, by the second follow-up, significantly more respondents noted increased tension or conflict with their children. Initial positive changes related to spending more time together were steady over time. In response to an open-ended question, positive changes were characterized as creating stronger bonds, having their husband home and interacting with their children more, and enjoying leisure activities together. A few participants were pregnant (7.1%) at the outset of the pandemic, and one participant noted, "Due to my pregnancy my family members have started loving me more". In response to a question about what the women were doing to stay healthy during this time, most noted engaging in prayer/worship, some were engaged in some type of physical activity or exercise, and a few noted the importance of a healthy diet, which had significantly increased between the first and second phone interview. Various other activities engaged in were sewing, cooking, doing handy work, caring for children, housework, playing indoor games, and sleep. We also analyzed change in mental health variables as the lockdown continued. Perceived stress (PSS-4) and mental health symptoms (HSCL-10) remained stable between the first and second follow-up phone interviews, whereas resilience significantly increased (see Table 3).
Paired Sample t-Test of Mental Health
The current sample's HSCL at baseline (pre-COVID) was significantly higher (M = 2.18, SD 0.86) than at the time of the first phone follow-up (M = 1.43, SD 0.46), t(92) = 8.53, p < 0.001, r = 0.66. To further understand the improvements in mental health during the COVID-19 lockdown period, we then carried out further analysis.
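The statistic above is a standard paired-samples t-test; the reported effect size r is presumably the common conversion r = sqrt(t²/(t² + df)), which recovers 0.66 from the reported t(92) = 8.53. A small self-contained sketch (the implementation and sample data are illustrative, not the study's code):

```python
# Paired-samples t-test with the effect size r = sqrt(t^2 / (t^2 + df)).
# Illustrative implementation; data passed to paired_t are hypothetical.
import math

def paired_t(before, after):
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    df = n - 1
    r = math.sqrt(t * t / (t * t + df))
    return t, df, r

# The conversion recovers the reported effect size from t and df alone:
t, df = 8.53, 92
r = math.sqrt(t * t / (t * t + df))
print(round(r, 2))  # 0.66
```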
One-Way Repeated-Measures ANOVA Exploring Mental Health (HSCL)
To longitudinally assess changes in mental health over five months, we conducted a one-way repeated-measures ANOVA comparing the HSCL scores of participants at three different times: baseline, first phone follow-up, and second phone follow-up. A significant effect was found (F(2, 132) = 28.24, p < 0.001). Post hoc analysis conducted using protected t-tests revealed that scores decreased significantly from baseline HSCL to first phone follow-up HSCL (M = 0.68, SD 0.86), and although no significant difference existed from first phone follow-up to second phone follow-up (M = −0.45, SD 0.55), a significant decrease in HSCL was maintained from baseline to second phone follow-up (M = 0.53, SD 0.91).
Linear Regression Analysis of Predictors of Mental Health Change
The multivariate analysis included the variables significantly bivariately associated with HSCL change from baseline to the first phone interview (perceived stress, family income and class, social support, wishful thinking, and life satisfaction); together these explained 42% of the variation in mental health change. Only social support and life satisfaction remained significant, and perceived stress was not statistically significant (p = 0.053). See Table 4 for a summary of the regression analysis. The post hoc achieved power was greater than 0.80; therefore, there is sufficient statistical power to support the analysis results.
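The "percent of variance explained" quoted here is the model R². A minimal sketch with a single predictor follows; the study's actual model was multivariate, and the variable names and data below are invented for illustration:

```python
# Simple least-squares regression of mental-health change on one
# predictor, reporting R^2 -- the share of variance explained.
# Hypothetical data; the study's actual model included five predictors.

def simple_ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_tot = sum((b - my) ** 2 for b in y)
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    return slope, intercept, 1 - ss_res / ss_tot

life_satisfaction = [18, 12, 25, 14, 22, 20]
hscl_change       = [0.9, 0.2, 1.2, 0.7, 0.8, 1.0]  # positive = improvement
slope, intercept, r2 = simple_ols(life_satisfaction, hscl_change)
print(round(r2, 2))  # 0.75
```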
Discussion
This study examined changes in mental health among slum-dwelling women in Mumbai during the first four months of India's pandemic lockdown. These vulnerable, hard-to-reach women were already grieving perinatal losses and at high risk for mental health sequelae due to reproductive challenges, and we wanted to explore how the stress of COVID-19 affected them during these early months of the lockdown. Indeed, the women had indicated their interest in a wellness intervention by sharing their phone contacts with us. Because our sample of women were poor (living in slums), we wanted to also explore if the government aid, although meager, may have provided unexpected temporary security.
Qualitative exploration yielded themes including reproductive expectations, consequences of reproductive challenges, and distress related to reproductive challenges. Reproductive expectations included early childbearing and the importance of producing offspring even if working outside the home. The consequences of reproductive challenges were described in terms of family upheaval and change in social status. These findings are consistent with the literature pertaining to India's pronatalist, patriarchal traditions [16,27].
Not only were emotional distress and mental health symptomology following perinatal loss vividly described, but these aligned with prior literature and emerging concerns related to maternal mental health during the pandemic [13,27,41]. These findings also validated our quantitative results, further elaborating the complex challenges our participants experienced. This may in part explain the unexpected finding that women's mental health did not deteriorate during the early COVID lockdown months, as it allowed for their husband's (and other family members') presence as a source of additional social support.
The work undertaken by slum communities usually entails working in the informal sector, including manual labor. These meager opportunities came to an abrupt halt with the lockdowns imposed by the Indian government in an effort to stop the rapid spread of the COVID-19 pandemic; therefore, slum-dwelling families were home together. To offset the economic peril of these draconian measures (which included strict shelter in place orders, with even shopping for food regulated and only allowed on certain days, according to one's residential area), the government offered subsidies and rations. Living in tight quarters, most of the women in our sample admitted they were unable to maintain social distancing and noted significantly more positive cases within their communities over time. Although these poor vulnerable women trusted the pandemic information received, and may have understood the necessity of lockdowns, they found the protections offered lacking, as noted in their responses to the negative effects on their monthly family income, availability of goods, and food. We therefore speculated that these circumstances would increase mental health symptomology among these already at-risk women, which we explored quantitatively.
Surprisingly, compared to the international literature that abounds with examples of poor mental health in persons asked to shelter in place due to COVID-19 prevention measures [17][18][19][20][21][22], women in our study experienced significant mental health improvements; in other words, they actually derived mental health benefits during the lockdown. We had originally planned to deliver a wellness intervention to address our participants' fertility-related mental health challenges. However, before the intervention could be offered, the world was struck with the unprecedented COVID-19 pandemic, and a natural experiment continues to unfold amid the pandemic and associated lockdowns. To better understand these unexpected results, we turned to the literature and simultaneously checked our interpretation of results with our community stakeholder partners, the ASHA workers.
For these Mumbai slum-dwelling, low-income women, having the family together and having everyone stay at home provided an opportunity for quality time, deepening connections, and drawing social support from each other. At baseline, their average HSCL score (2.18) was well above the cut-off score of 1.65, and presents a strong argument for the need for our originally planned wellness intervention. However, within the first month of lockdown, the average HSCL score significantly decreased to 1.43, now within the normal range, and remained within the normal range at the time of the second phone follow-up. The relatively large effect size, repeated measures, and congruence of participants' short qualitative responses to the phone interviews increase our confidence that the improved HSCL scores are not an artifact. Further supporting these findings, resilience scores also significantly increased during the lockdown between the first and second phone interviews (M = 21.48, SD 5.33 and M = 23.76, SD 6.08, respectively; p = 0.01).
The women in this study live in established slums, which are close-knit communities that, for many, provide a sense of social support [65]. However, it should be noted that at baseline women in the current sample indicated that they had low social support, which was significantly associated with HSCL in our regression analysis. Indeed, the COVID-19-related lockdown seemed to have ameliorated their perception of low social support. Their positive responses in the phone interviews are aligned with social media research among the general Indian population during lockdown [66]. Prior studies have shown social support to be paramount to the mental health of women dealing with stillbirth, infant death, or infertility [67][68][69]. As the lockdowns continued, our participants noted that they appreciated time with family members (and qualitatively reported that this was particularly true for the presence of their husbands), with nearly as many noting positive changes in relationships as those noting increased tension or conflict. When conflict was noted, it was characterized as not very serious, and did not significantly increase over time (p = 0.359).
Our results provide a snapshot in time of the mental health of these vulnerable women early in the pandemic. Clearly, for low-income women in a society that requires daily struggles to eat and survive, the unexpected subsidies and stipends provided a regular source of support they had never previously received, which, in conjunction with having their families near them, mitigated the stress of the pandemic, at least in its early months. We do not anticipate these positive effects to continue and do not have the ability for further study with these women. As the COVID-19 pandemic has continued much longer than anyone expected, government subsidies were not enough to meet demand and could not continue indefinitely [70]. There will likely be more mouths to feed due to interrupted family planning services [14], and as lockdown restrictions are progressively eased, there may or may not be jobs to return to [2]. These women and their families are mostly part of the informal work sector comprising 90% of India's working population, and will be disproportionately at risk for this high unemployment, which, without a continuing safety net, will result in dire outcomes, including hunger and loss of housing. As a result, they are progressively more prone to resultant health disparities the longer the economic consequences of the pandemic continue [2,71].
It is clear that additional work should be done to explore the long-term mental health consequences for the most vulnerable in India's society. However, as our data covering the first four months of the lockdown show, despite the many negative aspects of the pandemic, our participants reportedly have consistently employed strategies to stay healthy through meditation, prayer or worship, productive use of time, and exercise, pointing to a strong innate resilience that was functional with very little investment by the government. Even eating a healthy diet significantly increased (p = 0.009), perhaps in response to the education provided at the end of the first phone interview, in addition to making use of government rations, which included legumes, rice, and grain [38]. The return to basic staples may have resulted in a healthier diet than that eaten pre-pandemic [72]. Furthermore, employing our educational options to optimize wellbeing in confinement (suggested self-help strategies for physical and mental health) seems to have been effective for them in handling the situation. At the end of the study, five months after baseline measures and four months into lockdown, their perceived stress scores were in the low range and had not increased over time (p = 0.922), initial improvement in mental health symptomology (HSCL) was maintained, and resilience increased. Again, we do not expect these positive trends to continue, especially as the financial stipends decreased when the pandemic lingered for longer than anticipated. However, this natural experiment showed that if slum-dwelling families have even a small amount of governmental support (however modest), which currently does not occur in India, health overall increases despite a stressful life-threatening pandemic.
While reporting the results of this novel study, we acknowledge certain limitations that were unavoidable under lockdown conditions. Although participants consented to participate in the phone interviews, given the privacy limitations of being in tight living quarters with family members constantly present, participants may not have reported negative responses due to fear of being overheard and of subsequent retribution. There is also a risk of positive response bias, as participants may have underreported their struggles to maintain social distancing, which is nearly impossible in the context of crowded urban slums [73], although our data seem to reflect this reality quite accurately. Fear of stigma and blame around COVID-19 [70] may have also decreased participants' disclosure of known contacts within their families and communities. Fear may be amplified by drones being used to monitor lockdown compliance [70] and the looming threat of legal action [5]. Strengths of the paper include the intentional mixed-methods exploratory sequential design, which helped us contextualize the life experiences of our participants. Furthermore, we believe this resulted in our congruent, although unexpected, findings of improved mental health and resilience across multiple quantitative, well-validated measures and our qualitative data. Although we cannot speak to how this may have changed further into the pandemic, it does indeed show that financial support matters, especially to the poorest in society. It also suggests that any future intervention (post COVID-19 lockdowns) for these women who suffer from poor mental health because of their reproductive challenges should, at least in part, focus on helping them find social support to address the isolation many experience because of the deeply shameful experience of not being able to bear children.
Given the few mental health research papers published pertaining to the global pandemic, particularly among poor populations in India [66], this paper adds importantly to the literature. The longitudinal design provides a glimpse of unexpected positive change in the midst of the COVID-19 crisis. Follow-up studies and further analyses are needed as the situation and aftermath continue to unfold.
Conclusions
Slum-dwelling women in Mumbai reported experiencing increased social support during the first four months of the national lockdown. For the present time, in combination with initial governmental measures of support during the lockdown, this support is associated with a reduction in the mental health distress they experience due to reproductive challenges. In a traditional, pronatalist context in which women's status is directly affected by producing children, particularly sons, reproductive challenges often result in significant mental health distress, especially as women are often judged by their family members about their reproductive capacity. The finding of positive changes in mental health in our participants suggests that, although forced, the increased family cohabitation improved family relationships and social support. Future research should also explore the degree to which the positive changes in mental health noted in the current study may be transient, and particularly determine whether they were solely tied to the unexpected financial support the families received from the government, or whether the positive results had more to do with the relatively few cases originally noted in the social environment of these women, which surely would have increased exponentially over time.
Once the supportive measures are removed after the lockdown, and their lives are likely strongly impacted by either COVID-19 or the resulting economic downturn, we expect their original mental health challenges will resurface. With health systems already ill equipped to provide mental health services in this context, we are likely to see worsening mental health among those with mental health issues pre-COVID, suggesting the need for community-based non-stigmatizing wellness programs that will aid these vulnerable women to deal with past and emerging stresses.
Author Contributions: Each author made substantial contributions, has approved the submitted version, and has agreed to be personally accountable for their own contributions. L.R.R. contributed to the conception, study design, acquisition of funding, collection and analysis of data, and drafted the work. S.J.R. contributed to the conception, and interpretation of the data. S.S. contributed to the conception, as well as collection and interpretation of the data. S.M. contributed to the conception, study design, analysis of data and drafting. The measures used were available in the public domain and specifically selected having been used in India for prior studies. All authors have read and agreed to the published version of the manuscript.
Funding: Research support received from Research Affairs, Loma Linda University Health (LLUH). The views expressed are those of the author(s) and not necessarily those of LLUH. The funding body had no involvement in the collection, analysis, and interpretation of data and in writing the manuscript.
Institutional Review Board Statement: Institutional Review Board (IRB) approval was received from the first author's institution, Loma Linda University (IRB #5190351) and institutional ethics committee (IEC) approval from the second author's institution, Veer Wajekar Arts, Science & Commerce College (IEC #001/2020).
Informed Consent Statement:
Written informed consent to participate was obtained from all participants. Due to generally low literacy levels and some illiterate participants, the informed consent form was read aloud to each participant, with those able to do so reading along, followed by discussion and answering of any questions. Those choosing to participate either signed or marked the informed consent form with their thumbprint, a method commonly used on legal documents in India. A copy of the informed consent form was provided to each participant in their language of choice.
Data Availability Statement:
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
Standardized image interpretation and post-processing in cardiovascular magnetic resonance - 2020 update
With mounting data on its accuracy and prognostic value, cardiovascular magnetic resonance (CMR) is becoming an increasingly important diagnostic tool with growing utility in clinical routine. Given its versatility and wide range of quantitative parameters, however, agreement on specific standards for the interpretation and post-processing of CMR studies is required to ensure consistent quality and reproducibility of CMR reports. This document addresses this need by providing consensus recommendations developed by the Task Force for Post-Processing of the Society for Cardiovascular Magnetic Resonance (SCMR). The aim of the Task Force is to recommend requirements and standards for image interpretation and post-processing enabling qualitative and quantitative evaluation of CMR images. Furthermore, pitfalls of CMR image analysis are discussed where appropriate. It is an update of the original recommendations published in 2013.
Preamble
Cardiovascular magnetic resonance (CMR) has evolved into a gold standard non-invasive imaging tool in cardiovascular medicine, especially for visualizing and quantifying cardiovascular anatomy, volumes, and function, as well as for myocardial tissue characterization. CMR has unique capabilities in the diagnostic workup of suspected cardiovascular disease. It continues to expand its utility in day-to-day clinical practice. Given its versatility and wide range of quantitative parameters, agreement on specific standards for the image interpretation and post-processing of CMR studies is required to ensure consistent quality and reproducibility of CMR reports. This document addresses this need by updating the 2013 consensus recommendations developed by the Task Force for Post-Processing of the Society for Cardiovascular Magnetic Resonance (SCMR) [1]. The aim of the document is to recommend requirements and standards for image interpretation and post-processing, enabling qualitative and quantitative evaluation of CMR images. Furthermore, pitfalls of CMR image analysis are discussed where appropriate. The Task Force is aware that for some of the recommendations the body of evidence is limited. Thus, this document represents an expert consensus providing guidance based on the best available evidence at present as endorsed by the SCMR. As CMR continues to develop, updated recommendations for image acquisition, interpretation and post-processing will be provided by online appendices when needed and updated Task Force papers.
The recommendations are considered for the application of CMR in clinical routine in adult patients. For some applications, quantification is considered as providing added information but is not mandatory (e.g., perfusion), whereas for others quantification is required for all clinical reports (e.g., T2* assessment in iron overload). In general, the intention of this Task Force is to describe the scenarios in which quantitative analysis should be performed and how it is performed. Quantification itself is a moving target as artificial intelligence approaches to quantification are presently being instituted within CMR analysis software programs and will impact techniques in this arena in the future. The recommendations respect societal recommendations for structured reporting of cardiovascular imaging studies in general (ACCF / ACR / AHA / ASE / ASNC / HRS / NASCI / RSNA / SAIP / SCAI / SCCT / SCMR) [2] and specifically for CMR studies (SCMR) [3]. The recommendations do not supersede clinical judgment regarding the contents of individual interpretation of imaging studies. The Task Force made every effort to avoid conflicts of interest and, where present, to disclose potential conflicts.
General recommendations
The recommendations listed in this section apply to the acquisition and post-processing of all CMR data. CMR studies should be performed for recommended indications. Data acquisition and reporting should conform to the recommendations of SCMR [3,4]. Consistent methods of acquisition and measurement are essential for serial evaluation of changes over time. Standardized structured reports with tables of measurements are helpful for reporting follow-up examinations. Any analysis should be performed using uncompressed or lossless compressed Digital Imaging and Communications in Medicine (DICOM) source images. Factors such as type of sequence, spatial resolution, contrast agent and kinetics may influence visual and quantitative analysis and should be considered. Quantitative values should only be provided based on adequate image quality. Since there are no objective criteria for inadequate images, this determination needs to rely on the experience of the reporting physician. Readers should have adequate training and clinical experience that includes normal datasets to avoid over-interpretation of normal variants. The identity and responsibility of the reader should be appropriately documented in the report. Furthermore, the reader of clinical data is also responsible for the use of adequate post-processing hardware and software. The general requirements include:

1. A workstation and screen of adequate specification and resolution (as per the specifications of the post-processing software)
2. Post-processing software with regulatory approval for use in patients, ideally providing the following tools:
   a) Full DICOM send/retrieve functionality, network connection with the local Picture Archiving and Communication System (PACS) or a server solution with compliant patient security properties
   b) View all short-axis cines as movies in a single display; zoom, pan and change contrast for single images as well as image series
   c) Perform endocardial and epicardial contour tracings on cines
   d) Correct for atrioventricular annular location from the long-axis slice onto the most basal left ventricular (LV) short-axis location in contour tracings
   e) Cross-referencing of structures for confirmation of slice position and anatomy
   f) Simultaneously view cine, late gadolinium enhancement (LGE) and/or perfusion images from the same location
   g) Simultaneously view short- and long-axis images of the same region
   h) Simultaneously view images of approximately the same location on the current and prior study for serial studies
   i) Perform quantitative signal intensity (SI) and derived analyses
   j) Perform standardized segmentation of the myocardium according to the segment model of the American Heart Association (AHA) [5]
   k) Measure flow velocities and flow volumes
   l) Manually correct or enter heart rate, blood pressure, height, weight and body surface area
   m) Calculate volumes in stacked or 3D datasets with minimal user interaction, including and excluding trabecular tissue and papillary muscles from the LV volume [6]
   n) Document important findings in screenshots for the report
   o) For evaluation of angiography, the software ideally provides the following tools:
      i) 3D multiplanar and maximum intensity projection (MIP) capabilities
      ii) Volume rendering and surface shaded reconstructions (optional for reporting but not mandatory for quantitative analysis)
      iii) Measurement of distances and areas in 3D-MR angiography (MRA) images
      iv) MIP reconstruction based on non-subtracted or subtracted 3D-MRA datasets
      v) Multiplanar reformatting (MPR)
Left ventricular chamber assessment
Visual analysis

a) Before analyzing the details, review all cines in cine mode, validate observations from one plane with the others, and check for artifacts, especially in patients with irregular heart rates.
b) Dynamic evaluation of global LV function: interpretation of both ventricular chambers, in concert with extracardiac structures, including assessment for hemodynamic interaction between the two chambers (e.g., shunts, evidence of constrictive physiology).
c) Assessment of LV function from a global and segmental perspective. Segmental wall motion is based on segmental wall thickening during systole. Wall motion is categorized as hyperkinetic, normokinetic, hypokinetic, akinetic or dyskinetic.
d) In the presence of segmental wall motion abnormalities, use of standard LV segmentation nomenclature corresponding to the supplying coronary artery territories is recommended [3,5,7].
Quantitative analysis
a) General recommendations
   i) In patients with severe arrhythmias, the end-systolic volumes tend to be overestimated and ejection fraction underestimated. In case of significant artifacts this should be noted in the report.
   ii) Calculated parameters: LV end-diastolic volume, LV end-systolic volume, LV stroke volume, LV ejection fraction, cardiac output, LV mass, and body-surface-area indexed values of all except ejection fraction. The parameters quantified may vary depending on the clinical need.
   iii) Evaluation of the stack of short-axis images with computer-aided analysis packages.
   iv) Contours of endocardial and epicardial borders at end-diastole and end-systole (Fig. 1).
   v) Epicardial borders should be drawn on the middle of the chemical shift artifact line (when present).
   vi) The LV end-diastolic image should be chosen as the image with the largest LV blood volume. For its identification, the full image stack should be evaluated and one phase identified as end-diastole for all short-axis locations. In addition, closure of the mitral valve or the phase immediately before opening of the aortic valve may be used for orientation.
   vii) The LV end-systolic image should be chosen as the image with the smallest LV blood volume. For its identification, the full image stack has to be evaluated and one phase identified as end-systole for all short-axis locations.
   viii) Deviations may occur, and extra care should be taken in the setting of LV dyssynchrony.
   ix) Automatic contour delineation algorithms must be checked for appropriateness by the reader.
b) LV volumes
   i) Papillary muscles and trabecular tissue are myocardial tissue and thus ideally should be included with the myocardium as part of LV mass. As there is still discussion on the exact delineation of papillary muscles (e.g., versus trabeculation), and not all evaluation tools allow for their inclusion without manual drawing of contours, they are often included in the blood pool volume in clinical practice, which is acceptable. Reference ranges that use the same approach on both the acquisition and post-processing side must be used (Fig. 1) [8][9][10].
   ii) Outflow tract: the LV outflow tract is included as part of the LV blood volume. When aortic valve cusps are identified on the basal slice(s), the contour is drawn to include the outflow tract to the level of the aortic valve cusps.
   iii) Basal descent: as a result of systolic motion of the mitral valve toward the apex (basal descent), care must be taken with the one or two most basal slices by using a standardized, consistent approach. A slice that contains LV blood volume at end-diastole may include only left atrium (LA) without LV blood volume at end-systole. The LA can be identified by tracking wall thickening (if there is thickening, the slice is in the LV cavity) and cavity size (shrinking in systole when in the LV cavity). Alternatively, the basal slice may be defined by at least 50% of the blood volume being surrounded by myocardium. Currently, however, there is no expert consensus on which method to use. Some software packages automatically adjust for systolic atrioventricular ring descent using cross-referencing from long-axis locations.
c) LV mass
   i) Calculation: the total epicardial volume (sum of epicardial cross-sectional areas multiplied by the sum of the slice thickness and interslice gap) minus the total endocardial volume (sum of endocardial cross-sectional areas multiplied by the sum of the slice thickness and interslice gap), multiplied by the specific density of myocardium (1.05 g/ml).
   ii) Papillary muscles: papillary muscles and trabecular tissue are myocardial tissue and thus ideally should be included with the myocardium as part of LV mass; this is particularly relevant in diseases with LV hypertrophy [6]. However, readers may decide to exclude trabecular tissue and papillary muscles from the myocardial mass. Reference ranges that use the same approach must be used (Fig. 1) [8][9][10].
   iii) Basal descent and apex: when the most basal slice contains only a small crescent of basal lateral myocardium and no discernible ventricular blood pool, an epicardial contour for the visible myocardium is included for LV mass only. Similarly, when the most apical slice contains only a circle of myocardium without cavitary blood pool, an epicardial contour without an endocardial contour should be drawn for LV mass calculations.
d) Rapid quantitative analysis
   i) A rapid quantitative analysis, known as the area-length method, can be performed using biplanar (e.g., 2- and 4-chamber views) or rotational multiple long-axis views. In cases without expected significant regional variation of wall motion, this technique allows for faster evaluation and is not limited by problems related to basal descent. However, the 4-chamber view is strongly influenced by breath-hold position. The accuracy is not equivalent to short-axis coverage, but the method allows a fast analysis often more comparable to transthoracic echocardiography results. When the area-length method is used, with either a single long-axis view or a biplane approach, specific mention of the analysis technique should be made in the report.
   ii) Calculation [11][12][13]:
      - Single long-axis equation: LV volume = 0.85 × (LV area)² / LV length. This is typically performed using a 4-chamber view, with LV volume calculated on both the end-diastolic and end-systolic phases. LV area is the planimetered area of the LV cavity from an endocardial contour, with the base drawn as a straight line through the medial and lateral aspects of the mitral annulus. LV length is the linear dimension from the midpoint of the mitral annular line to the apical tip of the endocardial contour. Six views provide results that do not differ from short-axis stacks [14].
e) Cavity diameter and LV wall thickness can be obtained similarly to echocardiography using two CMR approaches [12,15]:
   i) Basal short-axis slice: immediately basal to the tips of the papillary muscles.
   ii) 3-chamber view: in the LV minor-axis plane at the mitral chordae level, basal to the tips of the papillary muscles.
   iii) Both approaches have good reproducibility. The 3-chamber view is most comparable to data obtained with echocardiography.
   iv) For maximal LV wall thickness, the measurement should be made perpendicular to the LV wall to ensure accuracy. At the apex, short-axis images are oblique to the axis of the wall and will be inaccurate; in this location in particular, long-axis views should be used.
f) Research
   i) Real-time cine acquisitions are becoming increasingly available and might be beneficial in patients with arrhythmia or limited breath-holding capacity. 3D cine acquisitions are also evolving to accelerate examination time. Post-processing of real-time images and 3D cine acquisitions is still technically evolving. The Task Force chooses to refrain from making a dedicated statement at this time.
   ii) Quantitative evaluation of LV myocardial dynamics (e.g., strain, rotation, time-to-peak velocity) is feasible with several imaging techniques (e.g., tagging, DENSE, SENC, tissue phase mapping, feature tracking) and requires specific post-processing software. As research applications are evolving and consensus evidence is being accumulated, the Task Force chooses to refrain from making a dedicated statement at this time.
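The LV mass calculation in c) i) and the area-length equation in d) ii) above translate directly into code. The following is an illustrative sketch only (function names and all numeric inputs are hypothetical, not patient data or reference values), showing the slice-summation mass formula and the single long-axis volume formula:

```python
def lv_mass_grams(epi_areas_cm2, endo_areas_cm2, slice_thickness_cm,
                  interslice_gap_cm, density_g_per_ml=1.05):
    """LV mass by slice summation: (total epicardial volume - total
    endocardial volume) multiplied by the specific density of myocardium
    (1.05 g/ml). Each slice's cross-sectional area (cm^2) is multiplied
    by the effective slice spacing (thickness + gap, in cm) to give ml."""
    spacing = slice_thickness_cm + interslice_gap_cm
    epi_vol_ml = sum(epi_areas_cm2) * spacing
    endo_vol_ml = sum(endo_areas_cm2) * spacing
    return (epi_vol_ml - endo_vol_ml) * density_g_per_ml

def area_length_volume_ml(lv_area_cm2, lv_length_cm):
    """Single long-axis (e.g., 4-chamber) area-length volume:
    V = 0.85 * A^2 / L, with area in cm^2 and length in cm, giving ml."""
    return 0.85 * lv_area_cm2 ** 2 / lv_length_cm

def ejection_fraction_pct(edv_ml, esv_ml):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Illustrative numbers: ten 0.8 cm slices with a 0.2 cm interslice gap.
mass = lv_mass_grams([30.0] * 10, [18.0] * 10, 0.8, 0.2)         # 126.0 g
edv = area_length_volume_ml(lv_area_cm2=40.0, lv_length_cm=9.0)  # ~151 ml
esv = area_length_volume_ml(lv_area_cm2=25.0, lv_length_cm=7.5)  # ~71 ml
ef = ejection_fraction_pct(edv, esv)                             # ~53%
```

Note that, as stated above, whichever inclusion convention is chosen for papillary muscles and trabeculae must match the convention used by the reference ranges.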
Right ventricular (RV) chamber assessment
Visual analysis

a) Review all cines in cine mode, validate observations from one plane with the others, and check for artifacts and coverage of the right ventricle (RV).
b) Assessment of global and regional RV function (septal wall, free wall), where appropriate. Wall motion should be described as hyperkinetic, normokinetic, hypokinetic, akinetic or dyskinetic. For qualitative regional analysis, wall motion in the RV free wall (e.g., basal, mid and apical portions), outflow tract and inferior wall may be evaluated as relevant to the specific clinical scenario and diagnosis.
c) Assessment of LV and RV chambers for hemodynamic interaction (i.e., constrictive physiology).
Quantitative analysis

a) General recommendations
   i) Calculated parameters: RV end-diastolic volume, RV end-systolic volume, RV ejection fraction, RV stroke volume, cardiac output, and body-surface-area indexed values of all except ejection fraction. Similar to the LV, the parameters quantified may vary depending on the clinical need [16].
   ii) The contiguous stack of short-axis images or axial cine images is evaluated with computer-aided analysis packages (Fig. 2) [17,18]. Automatically generated contours have to be carefully reviewed.
   iii) An axial stack of cines covering the RV provides the best identification of the tricuspid valve plane. A short-axis stack of cines is best for delineating the inferior wall.
   iv) Endocardial borders are contoured at end-diastole and end-systole (Fig. 2).
   v) The RV end-diastolic image should be chosen as the image with the largest RV blood volume. For its identification, the full image stack has to be evaluated and one phase identified as end-diastole for all locations.
   vi) The RV end-systolic image should be chosen as the image with the smallest RV blood volume. For its identification, the full image stack has to be evaluated and one phase identified as end-systole for all slices.
   vii) As for the LV, it may be necessary to review all image slices in the stack to define end-systole.
   viii) The pulmonary valve may be visualized; contours are included just up to, but not superior to, this level.
b) RV volumes
   i) Total volumes are taken as the sum of volumes from individual 2D slices, accounting for any interslice gap and slice thickness. RV trabeculae and papillary muscles are typically included in RV volumes.
c) RV mass is usually not quantified in routine assessment. In selected patients, quantification of RV mass may be considered (e.g., in pulmonary hypertension).
d) Confirmation of results
   i) If no shunt or valvular regurgitation is present, the RV and LV stroke volumes should be nearly equal (small differences are seen as a result of bronchial artery supply and papillary muscle inclusion in the measurements). Since the LV stroke volume is more reliably determined than the RV stroke volume, the LV data can be used to validate the RV data.
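The slice-summation volume rule in b) and the stroke-volume cross-check in d) can be sketched as follows. This is an illustrative sketch under stated assumptions: the function names, example numbers, and in particular the 10 ml agreement tolerance are hypothetical choices for demonstration, not guideline values:

```python
def ventricular_volume_ml(endo_areas_cm2, slice_thickness_cm, interslice_gap_cm):
    """Sum-of-slices ventricular volume: each 2D slice contributes its
    endocardial area (cm^2) times the effective slice spacing
    (slice thickness + interslice gap, in cm), giving a volume in ml."""
    return sum(endo_areas_cm2) * (slice_thickness_cm + interslice_gap_cm)

def stroke_volumes_consistent(lv_edv, lv_esv, rv_edv, rv_esv, tolerance_ml=10.0):
    """Cross-check: without shunts or valvular regurgitation, LV and RV
    stroke volumes (EDV - ESV) should be nearly equal; a large discrepancy
    flags a possible measurement problem, shunt or regurgitation.
    The 10 ml tolerance is an illustrative assumption."""
    lv_sv = lv_edv - lv_esv
    rv_sv = rv_edv - rv_esv
    return abs(lv_sv - rv_sv) <= tolerance_ml

# Illustrative numbers: twelve 0.8 cm slices with a 0.2 cm gap -> 240 ml.
rv_edv = ventricular_volume_ml([20.0] * 12, 0.8, 0.2)
```

As the text notes, because the LV stroke volume is the more reliable of the two, a failed check would usually prompt re-review of the RV contours first.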
Post-processing of myocardial perfusion imaging
Visual analysis

a) Workflow:
   i) Display perfusion and corresponding LGE images side-by-side.
   ii) Adjust window, contrast and brightness level for an optimized contrast within the LV myocardium (not the entire image). The aim of image adjustment is to set a maximal window width without "spilling" of the LV cavity signal into the myocardium. Ensure that the myocardium before contrast arrival is nearly black and that the window settings maximize the contrast within the myocardium. Note that choosing the correct level and window settings requires review of both pre- and peak-contrast images.
   iii) Apply the same contrast, brightness and window settings to all images of the dynamic series.
   iv) Review series as cines and/or by scrolling through individual images.
   v) Check that there was an adequate haemodynamic response to stress by reviewing the heart rate and blood pressure change between rest and stress, and the symptomatic response to stress. Images may also be checked for "splenic switch off" during stress [19].
   vi) The key diagnostic feature for identifying a perfusion defect is the arrival and first passage of the contrast bolus through the LV myocardium.
   vii) Visual analysis is based on a comparison between regions to identify relative hypoperfusion. Comparison should be made between endocardial and epicardial regions, between segments of the same slice, and between slices.
b) Stress images alone may permit the diagnosis of inducible perfusion defects. When the diagnosis is unclear based on stress images alone and rest images are available, the two image series can be compared. In general, an inducible perfusion defect will be present on the stress, but not the rest, images.

Fig. 2 (caption): The yellow contours indicate the RV in diastole (c) and systole (d); the RV is contoured following the LV analysis (in c and d, red/green contours indicate endocardial/epicardial borders of the LV) and with reference to the LV.
If perfusion defects are seen on both stress and rest images, they may be artifacts or have other causes such as myocardial scar. Note that artifacts may be less pronounced or absent on rest compared with stress images due to differences in haemodynamics and contrast kinetics between stress and rest.
c) Scar tissue may not necessarily cause a perfusion defect, especially if rest perfusion is acquired after stress. Scar should therefore always be identified from LGE and not from perfusion images.
d) Criteria for an inducible perfusion defect (Fig. 3a). The defect:
   i) Occurs first when contrast arrives in the LV myocardium.
   ii) Persists beyond peak myocardial enhancement and for several RR intervals.
   iii) Is more than two pixels wide.
   iv) Is usually most prominent in the subendocardial portion of the myocardium.
   v) Often manifests as a transmural gradient across the wall thickness of the segment involved: most dense in the endocardium and gradually becoming less dense towards the epicardium.
   vi) Over time, regresses from the subepicardium towards the subendocardium.
   vii) Is present at stress but not at rest.
   viii) Conforms to the distribution territory of one or more coronary arteries.
e) Interpret the location and extent of inducible perfusion defect(s) using the AHA segment model [5].
   i) Comment on the transmurality of the perfusion defect [20].
   ii) Indicate the extent of the perfusion defect relative to scar on LGE.
f) Criteria for dark banding artifacts (Fig. 3b): a common source of false-positive reports is subendocardial dark banding artifacts [21]. These artifacts have the following characteristics:
   - Are most prominent when contrast arrives in the LV blood pool.
   - Lead to a reduction in signal compared with baseline myocardial signal, whereas a true perfusion defect does not show a decrease in signal compared with baseline. These subtle differences can be hard to appreciate visually.
It can therefore be helpful to draw a region of interest (ROI) around the suspected artifact and display its SI-time profile. -Persist only transiently before the peak myocardial contrast enhancement. -Appear predominantly in the phase-encoding direction. -Are approximately one pixel wide. Dark banding present at stress and at rest with no corresponding scar on LGE images is also indicative of an artifact [22]. Note however that differences in heart rate and baseline contrast can change the appearance and presence of dark banding between stress and rest perfusion images. Thus, absence of dark banding at rest with typical dark banding at stress should not on its own be considered diagnostic for an inducible perfusion defect.
Fig. 3 Perfusion imaging. a Perfusion defect in the inferior segments (yellow arrow). Note defect is predominantly subendocardial, affects the perfusion territory of the right coronary artery and is more than one pixel wide. b Dark banding artifact (yellow arrow). Note defect is very dark, occurs already before contrast reaches the myocardium, is seen in the phase encoding direction (right-left in this case), and is approximately one pixel wide. c Positioning of endocardial (red) and epicardial (green) contours and a region-of-interest (ROI) in the LV blood pool (blue) for semiquantitative or quantitative analysis of perfusion data
g) Pitfalls of visual analysis i) Multi-vessel disease: Visual analysis is based on relative signal differences within an imaged section of the heart. Theoretically, the presence of balanced multivessel disease can result in most or all of the imaged section appearing hypoperfused, which can lead to false-negative readings and needs to be considered in relevant clinical circumstances. In practice, however, truly balanced ischaemia is rare and a perfusion defect in one or more territories will be more prominent. Even if all coronary territories are affected, the severity of the observed defects typically is more pronounced around the geographic center(s) of the coronary territories. In addition, a clear endocardial to epicardial signal gradient is usually seen in multi-vessel disease [23]. Quantitative analysis of the dynamic perfusion data may be of further help to detect globally reduced myocardial perfusion reserve in multi-vessel disease. ii) Microvascular disease: Diseases that affect the myocardial microvasculature (e.g., diabetes mellitus, systemic hypertension) may lead to a global subendocardial reduction in perfusion [24][25][26][27]. This can lead to false-positive readings relative to angiographic methods and needs to be considered in relevant clinical circumstances.
Features suggesting microvascular disease are the presence of concentric LV hypertrophy and a concentric, often subendocardial perfusion defect crossing coronary territories. Differentiation from multi-vessel disease can be challenging. iii) If vasodilation during stress data acquisition was inadequate, visual analysis may lead to false negative interpretation of the perfusion study [28]. iv) The distance of the myocardium to the surface coil affects signal intensity and may lead to misinterpretation if not considered in the analysis. These problems are less likely if acquisition is corrected for coil sensitivity.
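Where an SI-time profile is used to adjudicate a suspected artifact, the decisive features described above are whether the ROI signal dips below the pre-contrast baseline and whether the abnormality resolves before peak myocardial enhancement. A minimal sketch of such a check (the function name, the number of baseline frames, and the simple decision rule are illustrative assumptions, not part of the consensus recommendations):

```python
import numpy as np

def classify_roi_profile(si, baseline_frames=5):
    """Classify an ROI signal-intensity/time profile as suggestive of a
    dark-banding artifact or of a true hypoperfused region.

    si: 1D sequence of mean ROI signal intensity per dynamic frame.
    baseline_frames: number of pre-contrast frames used as baseline.
    """
    si = np.asarray(si, dtype=float)
    baseline = si[:baseline_frames].mean()
    peak_frame = int(np.argmax(si))
    # Minimum ROI signal between contrast arrival and myocardial peak.
    pre_peak_min = si[baseline_frames:peak_frame + 1].min()

    # Dark banding: transient signal drop BELOW the pre-contrast baseline.
    if pre_peak_min < baseline:
        return "dark-banding artifact (signal fell below baseline)"
    # True defect: reduced enhancement but never below baseline.
    return "possible true perfusion defect (no sub-baseline dip)"
```

This only encodes the single baseline criterion quoted above; in practice the reader would also check persistence beyond peak enhancement, width, and coronary distribution.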
Quantitative analysis a) A quantitative analysis of the SI change in myocardial perfusion CMR studies can be performed. Several methods have been described for this purpose. In clinical practice, these are rarely required, but they may supplement visual analysis for example in suspected multi-vessel disease or suspected inadequate response to vasodilator stress. Fully automated methods for quantitative perfusion analysis are becoming available and may soon become more widely used. Quantitative analysis is also frequently used in research studies. b) Requirements: i) Validation and definition of a normal range with the specific pulse sequence and contrast regime used for data acquisition. If only a comparison between regions of the same study is made, establishing a normal range is less relevant. ii) A temporal resolution of one RR interval is recommended. iii) Consideration of potential saturation effects (higher contrast agent doses are more likely to lead to saturation effects). c) Semi-quantitative analysis: i) Analysis methods that describe characteristics of the SI profile of myocardial perfusion CMR studies without estimating myocardial blood flow are typically referred to as "semiquantitative analysis methods". ii) Workflow: -Select an image from the dynamic series with good contrast between all cardiac compartments (some post-processing tools generate an average image of the series). -Outline LV endocardial and epicardial contours on this image (manual or automated) (Fig. 3c).
-Propagate contours to all other dynamic images. -Correct contour position for in-plane motion (some analysis packages register images prior to contours being outlined). -Depending on the type of analysis to be performed, place a separate ROI in the LV blood pool. Preferably, the basal slice is used. Exclude papillary muscles and flow artifacts from the ROI. -Select a reference point in the LV myocardium for segmentation (usually one of the RV insertion points) [5].
-Segment LV myocardium according to AHA classification [5] -Generate SI / time profiles for myocardial segments +/− LV blood pool. -Consider generating division into endocardial and epicardial layers and repeat analysis [20]. iii) Frequently used semi-quantitative analysis methods (see [29] for detailed review): -Maximal upslope of the myocardial SI profile, may be normalized to LV upslope [30]. -Time to peak SI of the myocardial SI profile [31,32]. -Ratio of stress/rest values for the above (often referred to as "myocardial perfusion reserve index") [33,34]. -The upslope integral (area under the signal intensity-time curve) [35]. iv) Limitations of semi-quantitative analysis methods: -SI may vary according to distance from coil. This can be partially corrected by using a pre-contrast proton density image or other coil sensitivity correction tools. -No absolute measurement of myocardial blood flow given. d) Quantitative analysis i) Analysis methods that process the SI profile of myocardial perfusion CMR studies to derive estimates of myocardial blood flow are typically referred to as "quantitative analysis methods" [29,36,37]. ii) Requirements: -It is a prerequisite for reliable quantification that data acquisition used an appropriate pulse sequence and contrast regime. -The requirements for the acquisition method depend on the analysis method. Currently, this typically requires at least a proton density image, the generation of an input function which is not saturated by using dual bolus [38] or dual contrast [39]. -Motion correction to correct for respiratory motion is preferable. iii) Workflow: -Manual analysis methods require contour placement as described above for semiquantitative analysis. Dynamic SI data are then typically exported to off-line workstations for further processing. -Fully automated methods are becoming available, which generate pixel-wise maps of myocardial perfusion without user input. 
iv) Several analysis methods have been described, including: -Model-based methods [40,41].
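The semi-quantitative indices listed above (maximal upslope, normalization to the LV blood-pool upslope, and the stress/rest "myocardial perfusion reserve index") reduce to simple operations on the SI-time curves. A minimal sketch, in which a frame-to-frame difference stands in for the sliding-window linear fit that analysis packages typically use (an assumption for brevity):

```python
import numpy as np

def max_upslope(si, frame_interval_s=1.0):
    """Maximal upslope of an SI-time curve (signal units per second)."""
    return np.diff(np.asarray(si, dtype=float)).max() / frame_interval_s

def normalized_upslope(myo_si, lv_si, frame_interval_s=1.0):
    """Myocardial upslope normalized to the LV blood-pool upslope."""
    return (max_upslope(myo_si, frame_interval_s)
            / max_upslope(lv_si, frame_interval_s))

def perfusion_reserve_index(stress_norm_upslope, rest_norm_upslope):
    """'Myocardial perfusion reserve index' = stress / rest ratio."""
    return stress_norm_upslope / rest_norm_upslope
```

For example, a myocardial curve whose steepest rise is 3 units/frame against an LV curve rising at 10 units/frame yields a normalized upslope of 0.3; repeating this at rest and dividing gives the reserve index.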
Post-processing of late gadolinium enhancement (LGE) of the left ventricle
should not be a single image intensity). ii) Note, on magnitude (not phase-sensitive inversion recovery [PSIR]) images, if normal myocardium has a faint "etched" appearance (darkest at the border with slightly higher image intensity centrally), this signifies an inversion time that was set too short and will lead to underestimation of the true extent of LGE (Fig. 4). In general, an inversion time that is slightly too long is preferred to one that is slightly too short [44]. c) Criteria for presence of LGE.
i) High SI area that is visibly brighter than "nulled" myocardium. ii) Verify regions with LGE in at least one other orthogonal plane and/or in the same plane obtain a second image after changing the direction of readout. d) Assess pattern of LGE i) Coronary artery disease (CAD) type: Should involve the subendocardium and be consistent with a coronary artery perfusion territory. ii) Non-CAD-type: Usually spares the subendocardium and is limited to the mid-wall or epicardium, although non-CAD-type should be considered if subendocardial involvement is global [45]. e) Interpret location and extent using AHA 17-segment model [5] [20]. i) Comparison of LGE images should be made with cine and perfusion images (if the latter are obtained) to correctly categorize ischemia and viability [46]. ii) Estimate average transmural extent of LGE within each segment (0%, 1-25%, 26-50%, 51-75%, 76-100%) [44]. iii) In patients with acute myocardial infarction, include subendocardial and mid-myocardial hypoenhanced no-reflow zones as part of infarct size.
f) Pitfalls i) Bright ghosting artifacts can result from poor electrocardiogram (ECG) gating, poor breathholding, and long T1 species in the imaging plane (e.g., cerebrospinal fluid, pleural effusion, gastric fluid, etc.) [47] ii) On non-PSIR images, tissue with long T1 (regions below the zero-crossing) may appear enhanced [44,48]. iii) Occasionally, it can be difficult to distinguish no reflow zones or mural thrombus from viable myocardium. Imaging using a long-inversion time [49], using PSIR, or performing post-contrast cine imaging may be helpful in this regard. iv) In case of reduced contrast, the interpretation of additional sequences may be necessary (see below section "Dark-blood/grey blood LGE"). v) In PSIR images manual windowing and quantification algorithms may behave differently when compared with magnitude images.
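Binning the estimated average transmural extent of LGE per AHA segment into the five categories listed above is a simple lookup; the function name is an illustrative assumption:

```python
def transmurality_category(percent):
    """Bin average transmural LGE extent (per AHA segment) into the five
    reporting categories: 0%, 1-25%, 26-50%, 51-75%, 76-100%."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be within 0-100")
    if percent == 0:
        return "0%"
    for upper, label in [(25, "1-25%"), (50, "26-50%"),
                         (75, "51-75%"), (100, "76-100%")]:
        if percent <= upper:
            return label
```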
Quantitative analysis
a) Quantitative analysis is primarily performed to measure LGE extent and/or grey-zone extent for research purposes. Subjective visual assessment is still a prerequisite to identify poor nulling, artifacts, no-reflow zones, etc., and to draw endocardial and epicardial borders. b) Multiple different methods of delineating LGE extent are described in the literature, including the following: manual planimetry, the n-SD technique, and the full width half maximum (FWHM) technique. As the research applications are evolving and consensus evidence is being accumulated, the Task Force chooses to refrain from making a dedicated statement at this time regarding the optimal method for quantitative assessment [50][51][52][53][54][55].
Research tools / quantitative analysis a) Quantification of LGE extent: i) Manual planimetry: -Outline endocardial and epicardial borders.
-Manual planimetry of LGE regions in each slice.
-Summation of LGE areas.
-Multiplication of total LGE area with slice thickness plus interslice gap as well as specific gravity of myocardium provides the approximate LGE mass, which can be used to calculate the ratio of LGE to total myocardial mass. -Considered subjective.
-Adjustment for regions with intermediate signal intensities (grey zones) caused by partial volume can improve reproducibility of measurements [54]. ii) The n-SD technique: -Manual outlining of endocardial and epicardial borders for the myocardial ROI. -Manual selection of a normal "remote" (dark) region ROI within the myocardium to define the reference SI (mean and standard deviation, SD). This subjective approach can affect measurements. -It is susceptible to spatial variations in surface coil sensitivity. -Selection of a threshold between normal myocardium and LGE. The relative SNR of scar tissue versus normal myocardium can vary dependent on contrast agent type, dose and time after injection, field strength, type of sequence and other variables including the underlying injury itself. As such, there is no cutoff value which works for all situations and usually manual tracing is performed as the standard of truth. But (semi-)automated thresholding may improve reproducibility after adequate standardization. As a starting point for semiautomatic thresholding we recommend 5-SD for infarction. There is currently not enough evidence to provide a cut-off for non-ischemic LGE. -The presence of LGE within the myocardium is then determined automatically. -Requires manual corrections to include no-reflow zones and to exclude artifacts and LV blood pool (errors in the endocardial contour).
Fig. 4 … LGE image, normal myocardium has a faint "etched" appearance (darkest at the border with higher signal intensity centrally) signifying an inversion time that was set too short and which will lead to underestimation of LGE. On the right panel, the image was repeated with a longer inversion time and demonstrates a larger LGE zone in the inferior wall. For non-PSIR magnitude imaging, always use the longest inversion time possible that still nulls normal myocardium.
iii) FWHM technique: -Manual outlining of endocardial and epicardial borders for the myocardial ROI.
-Uses the full width of the myocardial ROI SI histogram at half the maximal signal within the scar as the threshold between normal myocardium and LGE. -Visual determination whether LGE is present or not, and, if LGE is present, manual selection of a ROI that includes the region of "maximum" signal. This subjective selection can affect measurements. -Is also susceptible to spatial variations in surface coil sensitivity, albeit perhaps less so than the n-SD technique [51]. -Considered more reproducible than the n-SD technique [53]. -Since the technique assumes a bright LGE core, it may be less accurate than the n-SD technique if LGE is patchy or grey [56]. -Requires manual corrections to include no-reflow zones and to exclude artifacts and LV blood pool (errors in the endocardial contour). b) Peri-infarct zone: -Multiple methods for quantifying the extent of the peri-infarct or grey zones are reported [57,58]. -The Task Force does not endorse any specific evaluation technique due to the strong impact of partial volume effects. c) Dark-blood/grey blood LGE -Multiple techniques are described in the literature, but one that is "flow-independent" (i.e., does not rely on blood flow to suppress blood-pool signal) is preferable [59][60][61].
-As the research application(s) are evolving and consensus evidence is being accumulated, the writing group chooses to refrain from making a dedicated statement at this time regarding the optimal method for quantitative assessment of dark-blood/grey blood LGE images.
d) LGE in chambers other than the LV
There is increasing evidence about LGE imaging of the RV, which is usually captured with standard LGE protocols imaging the LV. Imaging the thin LA wall is difficult and requires specialized sequences. As the applications are evolving and consensus evidence is being accumulated, the writing group chooses to refrain from making a dedicated statement at this time regarding the post-processing assessment of LGE in chambers other than the LV.
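The thresholding rules described above (n-SD with the suggested 5-SD starting point for infarction, FWHM as half of the maximal SI within the scar core) and the planimetric mass calculation can be sketched as follows. Variable names are assumptions of this illustration; the 1.05 g/ml specific gravity of myocardium is the commonly used convention:

```python
import numpy as np

SPECIFIC_GRAVITY_MYOCARDIUM = 1.05  # g/ml, commonly assumed value

def nsd_threshold(remote_pixels, n=5.0):
    """n-SD threshold: mean + n*SD of a remote (normal) myocardial ROI.
    The text above suggests 5-SD as a starting point for infarction."""
    remote = np.asarray(remote_pixels, dtype=float)
    return remote.mean() + n * remote.std()

def fwhm_threshold(scar_roi_pixels):
    """FWHM threshold: half of the maximal SI within the bright scar core."""
    return np.asarray(scar_roi_pixels, dtype=float).max() / 2.0

def lge_mass_g(lge_area_cm2_per_slice, slice_thickness_mm, gap_mm):
    """Approximate LGE mass from manual planimetry: summed per-slice area
    times (slice thickness + interslice gap) times specific gravity."""
    total_area_cm2 = float(np.sum(lge_area_cm2_per_slice))
    spacing_cm = (slice_thickness_mm + gap_mm) / 10.0
    return total_area_cm2 * spacing_cm * SPECIFIC_GRAVITY_MYOCARDIUM
```

Dividing the LGE mass by total myocardial mass then yields the LGE fraction described in the planimetry workflow.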
Background
In 2013, the "T1 Mapping Development Group" published a consensus statement that proposed suitable terminology and specific recommendations for site preparation, scan types, scan planning and acquisition, quality control, visualization and analysis, and technical directions [62]. Building on this initiative, the Consensus Group on Cardiac MR Mapping was formed and in 2017 published "Clinical recommendations for CMR mapping of T1, T2, T2* and extracellular volume: A consensus statement by the Society for Cardiovascular Magnetic Resonance (SCMR) endorsed by the European Association for Cardiovascular Imaging (EACVI)" [63]. The following recommendations refer to these consensus statements. For more details regarding when and how to use T1 mapping, refer to this original consensus statement as well as to the SCMR protocol recommendations (Fig. 5).
Visual analysis
a) The visual analysis of the series of differently T1-weighted source images should aim to detect and verify diagnostic image quality. b) The visual analysis of the final T1 map should aim to detect artifacts and verify diagnostic image quality. Automatically generated quality control maps (e.g., T1*) may be used to exclude misregistration or significant artifacts. c) Maps may be displayed in color if the pertinent look-up tables are set according to site-specific ranges of normal, or in gray scale in combination with appropriate image processing, to highlight areas of abnormality.
Quantitative analysis
a) For global assessment and diffuse disease, a single ROI should be drawn conservatively in the septum on mid-cavity short-axis maps to reduce the impact of susceptibility artifacts from adjacent tissues. b) In case of artifacts or inconclusive results obtained from mid-cavity ROIs, basal ROIs can be used for validation. c) For focal disease, additional ROIs might be drawn in areas of abnormal appearance on visual inspection. A very small ROI (< 20 pixels) should be avoided. d) The position and size of automatically generated ROIs should be validated. e) Drawing ROIs on greyscale images rather than color maps may reduce bias. f) For assessing diffuse disease, focal fibrosis as assessed by LGE imaging should be excluded from the ROI. g) There is currently no specific recommended / preferred analysis software package. The image reader should be trained with the local standards and with the analysis software package of choice and be familiar with the appearance of artifacts. h) The sensitivity of mapping techniques to confounders such as heart rate and magnetic field inhomogeneities should be considered during interpretation.
i) Extracellular volume (ECV) estimation requires T1 mapping acquisitions before contrast agent administration (native T1) and after contrast agent administration (typically > 10 min post-contrast to approach steady-state conditions). The proposed post-processing steps should be applied equally to both maps. j) For calculating ECV, a ROI in the center of the blood pool in the native and in the post-contrast T1 map should be drawn excluding papillary muscles and trabeculae. k) For calculating ECV, hematocrit of the same day should be available. If this is not available, hematocrit may be estimated from native values of blood pool T1 ("synthetic ECV") [65]. l) ECV is given in %. The formula for calculating ECV: ECV = (1 − hematocrit) × (Δ(1/T1)myocardium / Δ(1/T1)blood), where Δ(1/T1) = 1/T1 post-contrast − 1/T1 native. m) Inline ECV maps can be a useful alternative to manual ECV calculations. The raw images should be checked to verify a diagnostic image quality and processing. n) For clinical reports, the type of pulse sequence, reference range, and type/dose of gadolinium contrast agent (if applied) should be quoted. o) Mapping results should include the numerical absolute value, the Z-score (number of standard deviations by which the result differs from the local normal mean; if available), and the normal range of the CMR system. p) Local results should be benchmarked against published reported ranges, but a local reference range should be primarily used. q) Reference ranges should be generated from data sets that were acquired, processed, and analyzed in the same way as the intended application, with the upper and lower range of normal defined by the mean ± 2 SD of the normal data, respectively. r) Parameter values should only be compared to other parameter values if they are obtained under similar conditions. In other words, the acquisition scheme, field strength, contrast agent and processing approach should be the same, and the results should be reported along with corresponding reference ranges for the given methodology.
Post-processing of T2-weighted imaging
Visual analysis
a) The visual analysis of T2-weighted images aims for detecting or excluding regions with significant SI increase, as a marker for an increased free water content (edema). b) Qualitative, visual analysis of myocardial SI may be sufficient for diseases with significant regional injury to the myocardium, such as acute ischemic injury/infarction, acute myocarditis (Fig. 6), stressinduced (Takotsubo) cardiomyopathy, and sarcoidosis. c) Workflow: i) Identify and display appropriate image(s). ii) Modify image contrast and brightness in the myocardial tissue to minimize SI in the most normal appearing myocardium (noise should still be detectable there) and to maximize the maximal SI displayed in the myocardium area without allowing for "over-shining", i.e., displaying non-white pixels as white. iii) Check for artifacts (typically SI changes crossing anatomical structures). d) Criteria for edema: i) Clearly detectable high SI area respecting anatomical borders. ii) Follows an expected regional distribution pattern (transmural, subendocardial, subepicardial, focal). iii) Verifiable in two perpendicular views. e) High SI areas suggestive of myocardial edema should be compared to i) regional function. ii) other tissue pathology such as scar/fibrosis and infiltration. f) Pitfalls of visual analysis: i) Surface coil reception field inhomogeneity: The uneven distribution of the sensitivity of the receiving surface coil may lead to falsely low SI in segments distant to the coil or falsely high SI in segments close to the coil surface, especially in dark-blood triple-inversion recovery spin echo (STIR, TIRM) images. If no efficient SI correction algorithm for balancing the signal intensity across the reception field is available, the body coil, albeit with a lower signal-to-noise ratio, provides a more homogeneous signal reception. 
ii) Low SI artifacts: Arrhythmia or through-plane motion of myocardium may cause artifacts, making areas appear with falsely low SI, especially in darkblood triple-inversion recovery spin echo images. iii) High SI artifacts: In dark-blood triple-inversion recovery spin echo images, slow flowing blood may lead to insufficient flow suppression and results in high SI of blood, typically along the subendocardial border. This can be confused with myocardial edema.
Semi-quantitative analysis
a) Because low SI artifacts can lead to SI distribution patterns that may mimic extensive myocardial edema, a mere visual analysis may lead to incorrect results. SI quantification with reference regions is much less sensitive to these errors and is therefore recommended. b) Requirements: i) Tested normal values for SI values or ratios. c) Workflow i) Global SI analysis: -Outline LV endocardial and epicardial contours. -For the T2 SI ratio, draw the contour for a ROI in a large area of the skeletal muscle closest to the heart and to the center of the reception field of the coil (for short axis views preferably in the M. serratus anterior) [66]. ii) Regional SI analysis: -Draw the contour for a ROI in the affected area and divide the SI by that of the skeletal muscle. iii) While a cut-off of 1.9 can be used for dark blood triple-inversion recovery spin echo [67], a locally established value is recommended, because SI and ratio values may vary between sequence settings (especially echo time (TE)) and CMR scanner models. For these images, a color-coded map, based on the parametric calculation and display of myocardial pixels with a SI ratio of 2 or higher, can also be used.
Fig. 6 T2-weighted image (short-tau inversion recovery, STIR) in a midventricular short axis view with increased SI in the inferolateral and lateral segments in acute myocarditis
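The global SI ratio described above reduces to a single division; a minimal sketch follows. The 1.9 cut-off is the published dark-blood STIR value quoted above, and, as stated there, a locally established value should be preferred; function names are illustrative:

```python
import numpy as np

def t2_si_ratio(myocardium_pixels, skeletal_muscle_pixels):
    """Ratio of mean myocardial SI to mean skeletal-muscle SI on
    dark-blood T2-weighted (STIR) images."""
    return float(np.mean(myocardium_pixels) / np.mean(skeletal_muscle_pixels))

def edema_by_ratio(ratio, cutoff=1.9):
    """Apply the published 1.9 cut-off; a locally established value
    is preferred where available."""
    return ratio >= cutoff
```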
Post-processing of T2 mapping
Background
The Consensus Group on Cardiac MR Mapping published in 2017 "Clinical recommendations for CMR mapping of T1, T2, T2* and extracellular volume: A consensus statement by the Society for Cardiovascular Magnetic Resonance (SCMR) endorsed by the European Association for Cardiovascular Imaging (EACVI)" [63]. The following recommendations refer to this consensus statement. For more details regarding when and how to use T2 mapping, refer to this original consensus statement as well as to the SCMR protocol recommendations.
Visual analysis
a) The visual analysis of the series of differently T2-weighted source images should aim for detecting and excluding artifacts and significant motion. b) The visual analysis of the final T2 map should aim for detecting and excluding artifacts. c) Maps may be displayed in color if the color look-up tables are set according to site-specific ranges of normal, or in gray scale in combination with appropriate image processing, to highlight areas of abnormality.
Quantitative analysis
a) For global assessment and diffuse disease, a single ROI should be drawn conservatively in the septum on mid-cavity short-axis maps to reduce the impact of susceptibility artifacts from adjacent tissues. b) In case of artifacts or non-conclusive results on midcavity ROIs, basal ROIs can be used for validation. c) For focal disease, additional ROIs might be drawn in areas of abnormal appearance on visual inspection. Very small ROIs (< 20 pixels) should be avoided. d) ROIs should be checked if generated automatically. e) Drawing ROIs on greyscale instead of color maps may avoid bias. f) Depending on the goal of the analysis, focal fibrosis as assessed by LGE imaging may be excluded from the ROI. g) There is currently no specific preferred analysis software package. The image reader should be trained with the local standards and with the analysis software package of choice and be aware of and familiar with the appearance of artifacts. h) Sensitivity of mapping techniques to confounders such as heart rate and magnetic field inhomogeneities should be considered during interpretation. i) Mapping results should include the numerical absolute value, the Z-score (number of standard deviations by which the result differs from the local normal mean), and the normal reference range. j) Parameter values should only be compared to other parameter values if they are obtained under similar conditions. In other words, the acquisition scheme, field strength and processing approach should be the same, and the results should be reported along with corresponding reference ranges for the given methodology.
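Reporting a mapping result as a Z-score against the local normal range (mean ± 2 SD), as recommended above, can be sketched as follows; function names are illustrative:

```python
def mapping_z_score(value, normal_mean, normal_sd):
    """Z-score: number of SDs by which a T1/T2 mapping result differs
    from the local normal mean."""
    return (value - normal_mean) / normal_sd

def within_reference_range(value, normal_mean, normal_sd):
    """Normal range defined as mean +/- 2 SD of the local normal data."""
    return abs(mapping_z_score(value, normal_mean, normal_sd)) <= 2.0
```

As the text stresses, the reference mean and SD must come from data acquired and processed in the same way as the result being evaluated.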
Post-processing of T2* imaging
Visual analysis
T2* imaging always requires a quantitative analysis. Visual analysis is used to ensure adequate image quality, which is the most important factor for the accuracy of data analysis.
Quantitative analysis
a) Evaluation of T2* always requires a quantitative analysis using software with regulatory approval for T2* evaluation in patients. b) Full thickness ROI located in the ventricular septum i) Septal ROI is drawn on mid-LV short-axis image. ii) Take care to avoid blood pool and proximal blood vessels. iii) A septal ROI avoids susceptibility artifact from tissue interfaces. c) Mean myocardial SI from the ROI is plotted against TE (Fig. 7) i) SI falls with increasing TE. ii) Curve fitting should apply a validated algorithm.
iii) The SI decay becomes faster (shorter T2*) with increasing iron burden. iv) In heavily iron overloaded patients, SI at higher TEs may fall below the background noise, causing the curve to plateau and leading to overestimation of T2*. v) This can be compensated for by: -Truncating the curve by removing later echo times (Fig. 7e) [68,69]. -This issue is not significant when using the double inversion recovery (black blood) sequence [70]. d) Cut-off values at 1.5 Tesla: i) Normal cardiac T2* is 40 ms [71] ii) T2* < 20 ms indicates cardiac iron overload [72] iii) T2* < 10 ms indicates increased risk of development of heart failure [73] e) CMR assessment of T2* at 3 T for assessment of iron overload cardiomyopathy cannot be recommended at this time. T2* shortens with increasing field strength, making assessment of severe iron overload more problematic, and there is a lack of clinical verification.
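A minimal sketch of the monoexponential fit and the truncation approach described above, assuming the standard decay model S(TE) = S0 · exp(−TE / T2*) and using SciPy's curve_fit (a clinically approved package is required for patient use, as stated above; this sketch is for illustration only):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_t2star(te_ms, si, truncate=0):
    """Fit S(TE) = S0 * exp(-TE / T2*) to mean septal-ROI signal intensities.

    te_ms: echo times in ms; si: mean ROI signal at each echo.
    truncate: number of late echoes to drop (truncation method for heavy
    iron overload, where late echoes plateau at the noise floor).
    Returns the fitted T2* in ms.
    """
    te = np.asarray(te_ms, dtype=float)
    s = np.asarray(si, dtype=float)
    if truncate:
        te, s = te[:-truncate], s[:-truncate]
    model = lambda t, s0, t2star: s0 * np.exp(-t / t2star)
    (s0, t2star), _ = curve_fit(model, te, s, p0=[s.max(), 20.0])
    return t2star
```

With clean data the fit recovers the underlying T2*; in severe overload, increasing `truncate` until the fitted value stabilizes mimics the published truncation method.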
Flow image interpretation and post-processing
Background CMR flow imaging provides information about blood flow velocities and volumes, and enables the visualization of blood flow. Flow assessment in a 2D slice is in widespread use. Recently, temporally resolved flow evaluation in a 3D volume (4D flow) has evolved enormously. It is currently predominantly used for evaluating congenital heart disease. For further details regarding application, acquisition and postprocessing of 4D flow also refer to the corresponding consensus document [74].
Visual analysis a) Appropriately aligned acquisitions of cines and stacks of cines can give valuable information on flow in relation to adjacent structures, notably on the directions, time courses and approximate dimensions of jets resulting from valve regurgitation, stenoses or shunts. Such information can be important in assessing the credibility of measurements of flow, which may be subject to several possible sources of error. Gradient echo cines differ somewhat from balanced steady state free precession (bSSFP) in terms of degrees of signal augmentation or reduction attributable to flow effects. Of note, bSSFP can provide clear delineation between the relatively bright signal from voxels aligned within the coherent core of a jet, and low signal from the shear layer that bounds such a jet core. In-or through-plane phase contrast flow velocity acquisitions can also provide visual information on the directions, dimensions and time courses of flow; it can also image morphology, which can yield a clue to the etiology of an abnormal jet [75,76]. It is also often used in congenital heart disease.
Color flow mapping in post-processing software may be useful in determining directionality of the jet or morphology. b) Pitfalls: i) Flow appearances on both cine and phase encoded acquisitions are highly dependent on image location and orientation, especially in the case of jet flow. ii) Check for the appropriate velocity encoding. If the range of velocity encoding (VENC) is set too high, the jet may not be well visualized and measurements may be inaccurate and have a poorer SNR. If it is set too low, aliasing will appear as a mosaic pattern on the images [77]. iii) If slice thickness is too large on in- or through-plane velocity mapping, the higher velocities will be "averaged out" with the lower velocities and stationary tissue; jets and flow may not be visualized correctly. iv) If the annulus of valves is very dynamic or the imaging plane is not set correctly, the valve morphology may not be visualized. v) If imaging in the presence of metal containing devices, signal loss may be present as artifact and interpretation must proceed with caution. vi) Check for appropriate spatial and temporal resolution. For spatial resolution, 8 to 16 pixels should fill the vessel to obtain accurate results on through-plane velocity mapping. For temporal resolution, there should be at least 11-16 frames per cardiac cycle [78].
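The rule-of-thumb acquisition checks quoted above (8 to 16 pixels across the vessel; at least roughly 11-16 frames per cardiac cycle) can be encoded as a small helper; the function name and return structure are illustrative assumptions:

```python
def flow_acquisition_checks(vessel_diameter_mm, pixel_spacing_mm,
                            frames_per_cycle):
    """Check the rule-of-thumb requirements for through-plane velocity
    mapping: 8-16 pixels across the vessel and >= ~11 frames per cycle."""
    pixels_across = vessel_diameter_mm / pixel_spacing_mm
    return {
        "pixels_across_vessel": pixels_across,
        "spatial_ok": pixels_across >= 8,
        "temporal_ok": frames_per_cycle >= 11,
    }
```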
Quantitative analysis a) Workflow: i) Through-plane measurements may be supplemented by in-plane measures if needed. ii) Review phase and magnitude images side by side. Window the magnitude and phase images to the appropriate brightness and contrast so that the borders of the ROI are sharp. iii) Examine the images to ensure that the quality is sufficient, that the VENC was not exceeded, and that there is not too little contrast (i.e., the VENC was not set too high). iv) Trace the borders of the vessel of interest on each phase and magnitude image so that only the cavity of the vessel is included (Fig. 8); make sure the noise outside the vessel is not included. Check that this is performed correctly on the magnitude images, always keeping in mind that it is the phase images that contain the encoded information. v) Baseline-correction may be considered. As the utility and exact methods are not yet established, the writing group chooses to refrain from making a dedicated statement at this time regarding its use. vi) Directly calculated parameters include antegrade and retrograde volume, flow rate, and peak and mean velocity. vii) Derived parameters include, for example, aortic regurgitant volume, which can be estimated by subtracting the pulmonary artery flow or the sum of caval return from the forward flow across the aortic valve in the absence of significant aortic to pulmonary collateral flow (noting that this will be a slight overestimate, as bronchial flow is ~5% of total aortic output) [79]. -Estimation of cardiac shunts is feasible by calculating Qp/Qs based on the stroke volume obtained by flow measurements in the pulmonary artery and at the aortic sinotubular junction. Shunts can also be quantified by direct measurement of the flow through the shunt. b) Pitfalls: i) On the phase images, the area of flow may be slightly larger than on the magnitude images. Care has to be taken when evaluating the magnitude images; the size of the ROI has to be adapted.
ii) If the VENC is exceeded, some software packages allow for adjusting the "dynamic range" of the velocity scale so that the VENC is not exceeded. For example, if the peak velocity in the aorta is 175 cm/s and the VENC was set at 150 cm/s, the dynamic range is between −150 cm/s and +150 cm/s (i.e., 300 cm/s). This may be moved to −100 cm/s and +200 cm/s to accommodate the higher velocity. This is demonstrated on the velocity graph, where the phase in which the VENC was exceeded no longer "aliases" (appears to go the wrong way) after correction. iii) In general, the area that exceeds the VENC in the ROI is in the center of the vessel and not at the edges; if it is at the edges, it is usually (but not always) outside the vessel. iv) If imaging in the presence of devices, signal loss may be present as artifact and interpretation must proceed with caution [80]. v) When measuring peak velocity, some software packages will determine the peak velocity in one pixel in the ROI, whereas others may take the average peak velocity of a few adjacent pixels in the ROI. By reporting the peak velocity in a single pixel, noise may make this measurement inaccurate. By reporting it as an average of a few adjacent pixels, noise is less of an issue; however, the true peak velocity may be higher than the reported value. These factors must be kept in mind and interpretation may need to be adapted to the measurement technique used. vi) When attempting to measure peak velocity using through-plane velocity mapping along a vessel, interpretation should be tempered by the notion that this parameter may be an underestimate, as the true peak velocity lies somewhere along the vena contracta; the through-plane velocity map may not have been obtained at the level of the true peak velocity. If the vena contracta is itself narrow or ill defined, jet velocity mapping is unlikely to be possible.
vii) Peak velocity is only minimally affected by small background phase offsets, while volume measurements can be dramatically affected by even a small background phase offset due to the cumulative effect of integration over space (within the ROI) and time (over the cardiac cycle). Dilatation of a vessel tends to increase error of this type [81]. viii) Orientation of the image plane perpendicular to the flow direction can have a significant impact on peak velocity measurement, while not significantly affecting volume flow [78]. ix) Internal consistency may be used to partially assess the accuracy of measurement (e.g., the flows in the branch pulmonary arteries should sum to the flow in the main pulmonary artery, and the stroke volume obtained by flow measurement can be compared with the stroke volume obtained by volumetry of cine images).
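Several of the quantitative steps and pitfalls above lend themselves to a compact numeric sketch: integrating velocity over the traced ROI and the cardiac cycle (workflow step vi), re-mapping an aliased velocity into a shifted dynamic range (pitfall ii), and the internal consistency check of pitfall ix. A minimal Python illustration; all function names and the 10% tolerance are assumptions, not guideline-mandated values, and clinical packages apply further corrections (e.g., for background phase offsets):

```python
import numpy as np

def flow_volume_ml(velocity_maps_cm_s, roi_masks, pixel_area_cm2, rr_interval_s):
    """Net flow volume per beat (ml) from through-plane velocity maps.

    velocity_maps_cm_s: array (frames, ny, nx) in cm/s; roi_masks: boolean
    array of the traced vessel lumen per frame. Illustrative sketch only.
    """
    n_frames = velocity_maps_cm_s.shape[0]
    dt = rr_interval_s / n_frames                       # seconds per frame
    # flow rate per frame (ml/s) = sum of velocity * pixel area inside ROI
    rates = [(velocity_maps_cm_s[f][roi_masks[f]] * pixel_area_cm2).sum()
             for f in range(n_frames)]
    return float(np.sum(rates) * dt)

def shift_dynamic_range(velocity_cm_s, venc_cm_s, new_min_cm_s):
    """Re-map an aliased velocity into a shifted window of width 2*VENC.

    E.g., moving [-150, +150) to [-100, +200) recovers a true 175 cm/s
    that was recorded, aliased, as -125 cm/s.
    """
    span = 2.0 * venc_cm_s
    v = velocity_cm_s
    while v < new_min_cm_s:
        v += span
    while v >= new_min_cm_s + span:
        v -= span
    return v

def branch_consistency(mpa_ml, rpa_ml, lpa_ml, tolerance_pct=10.0):
    """Check that branch pulmonary artery volumes sum to the main PA volume."""
    diff_pct = abs((rpa_ml + lpa_ml) - mpa_ml) / mpa_ml * 100.0
    return diff_pct, diff_pct <= tolerance_pct
```

For a shunt assessment, Qp/Qs is then simply the ratio of the stroke volumes returned by `flow_volume_ml` for the pulmonary artery and the aorta.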
Research tools a) Real-time velocity mapping: The utility of this approach and the post-processing algorithms best applied to it are the subject of ongoing research.
Post-processing of angiography of thoracic aorta, pulmonary arteries and veins Visual analysis a) MIP for first review of 3D data and for demonstration purposes (Fig. 9). Volume-rendered (VR) techniques may be used for demonstration purposes, but not for quantitative analysis. b) Aorta [82,83]: i) Wall thickness: Review bSSFP or turbo spin echo images. Avoid measurement in areas with artifacts that may distort anatomy, such as chemical shift artifacts. ii) Wall irregularities: Review 3D-MRA source images and bSSFP or turbo spin echo images. c) Pulmonary arteries [84]: i) Multiplanar double-oblique and targeted MIP reconstructions for assessment of wall-adherent thrombi, emboli, wall irregularities and abrupt diameter changes. d) Pulmonary veins [85]: i) Assess for atypical insertion, small accessory veins and ostial stenoses. e) Coronary arteries: i) Coronary MRA (either contrast-enhanced or non-contrast MRA using 3D whole-heart bSSFP) can play a role in assessment of congenital anomalies [86], but not usually in the context of ischemic heart disease. The origins, branching patterns, and course of the coronary arteries should be described. Quantitative analysis a) Aorta: i) Diameters of the aorta are measured on double-oblique MPR of source images perpendicular to the vessel centerline at standardized levels (Fig. 10) [87]. In oval-shaped vessels the longest diameter and its perpendicular diameter shall be reported. Either the inner (lumen) or outer (external vessel wall) diameter may be measured. This should be included in the report, as well as the type of angiography (with or without contrast enhancement). Measurement of the outer contour is recommended in dilation, such as in aneurysms, while the inner contour is recommended in the setting of stenosis, such as in coarctation. ii) In the presence of wall thickening (e.g. thrombus or intramural hematoma) inner and outer diameters including vessel wall thickness should be reported. iii) Aortic root measurements require ECG-gated images.
Diameter of the sinus portion should be recorded as the maximum sinus-to-sinus measurement perpendicular to the vessel centerline. For more details and normal values refer to [82]. iv) Standardized structured reports with tables of diameters are helpful for reporting follow-up examinations. b) Pulmonary artery: i) Diameters are measured on double-oblique images perpendicular to the centerlines of the pulmonary trunk as well as the right and left pulmonary arteries. It should be reported whether the inner or outer contour was measured. In oval-shaped vessels the longest diameter and its perpendicular diameter shall be reported, with measurement during systole recommended. Alternatively, cross-sectional area may be measured. For normal values refer to [84]. c) Pulmonary veins: i) Double-oblique MPR of pulmonary veins perpendicular to the centerline for diameter measurements. For a more comprehensive assessment including flow measurements refer to [85]. Fig. 10 caption (fragment): … (6), mid-descending aorta (7), diaphragm (8), abdominal aorta above the celiac trunk (9). (Adapted from [87])
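The structured follow-up reporting recommended in iv) can be sketched as a small table builder. The level names below are typical standardized aortic measurement sites in the spirit of Fig. 10 (not an exact reproduction of its labels), and the 3 mm interval-growth flag is an illustrative assumption rather than a guideline threshold:

```python
# Typical standardized aortic levels (assumed names, for illustration only).
LEVELS = ["annulus", "sinus of Valsalva", "sinotubular junction",
          "mid-ascending aorta", "proximal arch", "mid arch",
          "proximal descending aorta", "mid-descending aorta",
          "diaphragm", "abdominal aorta above celiac trunk"]

def aorta_report(current_mm, prior_mm=None, growth_threshold_mm=3.0):
    """Build a structured follow-up table of aortic diameters (sketch).

    current_mm / prior_mm: dicts mapping level name -> diameter in mm.
    Flags interval growth above an illustrative threshold.
    """
    rows = []
    for level in LEVELS:
        if level not in current_mm:
            continue
        row = {"level": level, "diameter_mm": current_mm[level]}
        if prior_mm and level in prior_mm:
            delta = current_mm[level] - prior_mm[level]
            row["change_mm"] = round(delta, 1)
            row["flag"] = delta >= growth_threshold_mm
        rows.append(row)
    return rows
```

Emitting the same table at every follow-up examination keeps serial measurements directly comparable.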
|
v3-fos-license
|
2019-09-10T02:04:40.324Z
|
2019-01-01T00:00:00.000
|
202095873
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/37/e3sconf_clima2019_01046.pdf",
"pdf_hash": "001ba63eabf2828e1f06d594732b4026b2504c1f",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:760",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "74409887d22ce5fe06bef22a7cefc7a16039bc56",
"year": 2019
}
|
pes2o/s2orc
|
Aircraft passenger comfort evaluation: sitting and standing passengers in commercial cabin
This research investigates the evaluation of passenger comfort during the cruise phase of an airplane trip. In commercial aircraft, comfort design depends on the flow fields set by the boundary conditions of the diffusers, cabin furnishings, and cabin geometry, which are responsible for providing a healthy environment for the passengers. The objective of this work is to characterize the airflow by measuring the velocity field and the air temperature inside the cabin. Based on the measured data, a computational fluid dynamics (CFD) analysis was performed using the Autodesk simulation package in order to obtain information about possible comfort standards for standing and seated passengers. The results for particle dispersion in the cabin showed a strong influence of the ventilation system and of the location in the aircraft where people generate the particles. Based on these results, the internal layout of the BWB2 airplane, also known as the "Flying Wing", was designed. The projected cabin furniture features ventilation intended to meet the passenger's needs in flight. The still-incipient individualization of passengers' thermal comfort is one of the biggest problems faced by airlines and, consequently, a possible competitive differentiator between them.
Introduction
This article presents a comparative study of thermal comfort by means of computational fluid dynamics (CFD) simulations of an aircraft cabin, both empty and with a thermal manikin seated and standing. CFD can provide detailed information on thermal comfort that is impossible to obtain in experimental research. We discuss several issues of CFD studies of the empty environment and of computational thermal manikins (CTMs) with the actual dimensions of a person. The simulations take as boundary conditions the air temperature and the fluid and solid properties of the interior of the aircraft cabin. The results of the CFD simulations will provide reliable thermal comfort parameters for the cabin layout design of the commercial passenger aircraft BWB2, also known as the "flying wing", while respecting the cabin geometry. The model of this aircraft was developed by H. D. C. Muñoz. This research is justified by the impact of air quality on humans during cruise, which determines elements of aeronautical comfort design and strongly influences the thermal conditions experienced by the passenger. In addition, the importance of designing the internal environment of the airplane cabin, and of establishing specific norms and standards for such an environment and its particularities, has become a subject of discussion among researchers and a demand of passengers, travel agencies, and airlines competing with one another. Based on these parameters, the BWB2 "Flying Wing" was designed with ventilation and air temperature customized to meet the thermal comfort requirements of the aircraft cabin with respect to air movement and passenger comfort during flight.
Objective
The objective is to perform a CFD thermal analysis comparing numerical results for an empty aircraft cabin with those for a thermal manikin sitting in a seat and standing in the cabin. The overall thermal results will provide reliable parameters for designing the personalized ventilation of the BWB2 aircraft, respecting the cabin geometry, in order to guarantee the passengers' thermal comfort.
Input Data
The simulations were performed with the Autodesk CFD computational fluid dynamics package, which provides mesh generation tools (pre-processing), solution of the discretized conservation equations (solver), and post-processing, with which streamlines, profiles, and animations could be calculated. The data used in the CFD simulation of the commercial cabin of the E-170 aircraft are described in Figures 1 and 2.
Empty cabin
Figures 5-11 present the empty-cabin results. Figure 5 shows the air supply diffuser and the computational mesh of the fluid domain. Figure 6 shows the empty-cabin air speed. The graphs in Figures 10 and 11, generated by the software, display the air temperature and velocity results for the empty cabin. In the absence of a human being, the first observation is that the lower air intake seems to have little or no influence on the air temperature near the floor, not even in the immediate surroundings of the inlet. The temperature field in Figure 7 shows a variation of up to 2 °C in the region where passengers would be seated. This asymmetry, with the lowest temperature in the head region and warmer feet, causes discomfort for the occupants of the space.
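The head-to-foot asymmetry described above can be quantified with the standard ISO 7730 local-discomfort model for vertical air temperature difference; this model is not used in the paper itself, and strictly applies to the case of air warmer at head level, but it gives an order of magnitude for the predicted percentage dissatisfied (PD):

```python
import math

def percent_dissatisfied_vertical_dt(delta_t_head_feet_c):
    """Predicted percentage dissatisfied (PD) for a vertical air
    temperature difference, per the ISO 7730 local-discomfort model.

    Note: the ISO model assumes the air is warmer at head level; it is
    applied here only as an external, illustrative yardstick for the
    2-2.5 degC asymmetries reported in the cabin simulations.
    """
    dt = delta_t_head_feet_c
    return 100.0 / (1.0 + math.exp(5.76 - 0.856 * dt))
```

For the 2 °C difference found in the empty cabin the model predicts only a small dissatisfied fraction, and PD grows steeply as the difference increases.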
Front view of the cabin with seated manikin
The CFD simulations performed with the seated manikin are presented in Figures 12-16. In these results, the lower air intake exposes the passenger near the window to a more balanced scenario, which is not the case for the passenger on the aisle.
Front view of the cabin with standing manikin
The CFD simulations performed with the standing manikin are presented in Figures 17-22. With the passenger standing, the head-to-foot temperature variation reaches 2.5 °C. This difference represents discomfort. Two questions arise: does the individual act as an obstacle to the flow, and how much heat does the body emit? Standing is a sporadic posture, occurring when a passenger gets up, usually to sit down again soon afterwards.
Flight attendants are likely more exposed to this scenario. As expected, the regions in the wind shadow have higher temperatures. However, the (non-existent) division in the middle of the computational domain seems to force a change of air direction, creating a dynamic different from what actually occurs; this raises the question of whether the entire cabin can be simulated.
Conclusion
The results of this work present a model based on CFD simulation with air temperature and velocity distribution parameters in a real commercial aircraft cabin environment, empty and with a passenger sitting and standing. Studying the standing passenger is important because during flight passengers rise and circulate in the cabin, and flight attendants remain standing while serving food. The CFD simulations present different air temperature and velocity profiles for each form of occupancy, the empty cabin or the cabin with passengers, the latter introducing a new variable, human breathing. These factors result in different particle distribution patterns in the cabin. The ceiling ventilation system promotes a greater dispersion of particles throughout the cabin, owing to the mixing characteristics of this type of ventilation; the resulting higher particle concentration indicates a lower efficiency in removing contaminated air from the cabin. The research thus shows that the dispersion of airborne particles is strongly influenced by the ventilation system used and by the particle injection point. These factors became key parameters for developing the custom ventilation design for the commercial passenger aircraft BWB2, known as the "Flying Wing". Studying environmental comfort is important to verify the real thermal behavior of commercial aircraft users, given the need to investigate the air diffusers responsible for passenger discomfort and for the transmission of diseases such as SARS, and the challenges involved in designing a healthy and comfortable cabin environment.
Based on research on personalized ventilation (PV) and underfloor air distribution (UFAD), the furniture design of the commercial flying-wing aircraft was developed, as presented in Figures 23 through 27, considering especially the passenger's point of view and emphasizing above all the concepts of human thermal sensation. The layout of the ergonomic seat is shown in Figure 25, the kitchen design in Figure 26, and the bathroom design in Figure 27.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2013-11-21T00:00:00.000
|
38222143
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://compcytogen.pensoft.net/lib/ajax_srv/article_elements_srv.php?action=download_pdf&item_id=1800",
"pdf_hash": "45494927ac5fbc6a762df3490eda68f5274b6a78",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:763",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "45494927ac5fbc6a762df3490eda68f5274b6a78",
"year": 2013
}
|
pes2o/s2orc
|
Bibliography of studies on hybrid zones of the common shrew chromosome races distributed in Russia
Abstract The common shrew, Sorex araneus Linnaeus, 1758, became a model species for cytogenetic and evolutionary studies after the discovery of extraordinary Robertsonian polymorphism at the within-species level. The development of differential staining techniques (Q-, R- and G-banding) made it possible to identify the chromosomal arms and their combinations in racial karyotypes. On coming into contact with each other, the chromosomal races may form hybrid zones, which are of great interest for understanding the process of speciation. Until recently all known hybrid zones of S. araneus were localized in Western Europe and only one was identified in Siberia (Russia), between the Novosibirsk and Tomsk races (Aniskin and Lukianova 1989, Searle and Wójcik 1998, Polyakov et al. 2011). However, a rapidly growing number of reports on the discovery of interracial hybrid zones of Sorex araneus in the European part of Russia and neighboring territories has appeared lately. The aim of the present work is to compile a bibliography of all studies covering this topic, regardless of the original language and the publishing source, which will hopefully make the research data more accessible to international scientists. It could also be a productive way to preserve the current history of Sorex araneus research in the full context of the ISACC (International Sorex araneus Cytogenetics Committee) heritage (Searle et al. 2007, Zima 2008).
Introduction
The common shrew, Sorex araneus Linnaeus, 1758, displays exceptional karyotype variability derived from intraspecific chromosome rearrangements of the Robertsonian type. Metacentric pairs of S. araneus are formed by fusion of originally acrocentric chromosomes at their centromeres in different combinations of arms. As a result, the chromosome number (2n) varies from 20 to 33; an odd number reflects the karyotype of a Robertsonian heterozygote with one metacentric and two acrocentrics instead of two homozygous metacentrics or four acrocentrics. At the same time the fundamental number of chromosome arms (FN) remains unchanged and equal to 40. Since this process takes place within populations, we can speak of Robertsonian polymorphism, which occurs across the vast range of the S. araneus species.
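The arithmetic linking 2n and FN described above can be made explicit: each Robertsonian fusion converts two acrocentrics into one metacentric, reducing 2n by one while leaving the total arm count unchanged. A minimal illustrative helper (hypothetical name, ignoring the complications of the sex chromosomes):

```python
def karyotype_2n(n_metacentric_chromosomes, n_acrocentric_chromosomes):
    """Diploid number (2n) and fundamental number (FN) of a karyotype.

    Metacentrics contribute two arms each, acrocentrics one, so with
    FN fixed at 40 in S. araneus, 2n ranges from 20 (all metacentric)
    to 40 (all acrocentric); observed races span 20-33.
    """
    two_n = n_metacentric_chromosomes + n_acrocentric_chromosomes
    fn = 2 * n_metacentric_chromosomes + n_acrocentric_chromosomes
    return two_n, fn
```

For example, the observed maximum of 2n = 33 corresponds to 7 metacentric plus 26 acrocentric chromosomes, still giving FN = 40.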
After the pioneering analyses in Western Europe in the 1950s and 1960s, studies of Robertsonian polymorphism in S. araneus populations started in Russia, widening the area of cytogenetic investigation to include the European and Asian parts of the former USSR (Orlov 1974). The observed variations in chromosome arm lengths led to the conclusion that Robertsonian fusions might involve different arms in different populations, resulting in widely varying non-homologous metacentrics (Orlov and Kozlovsky 1969, Ford and Hamerton 1970, Hausser et al. 1985). The introduction of new methods of chromosome identification (Q-, R- and G-banding) improved karyotype definition and increased interest in common shrew chromosome evolution. The International Sorex araneus Cytogenetics Committee (ISACC) was founded at Oxford University in 1987 and until recently international meetings were held every 3 years; the results of its activity were summarized in 2007 by Searle et al. Based on chromosome-specific G-banding patterns, Searle et al. (1991) established the standard nomenclature for chromosomes of S. araneus. Later, rules were developed for differentiating intrapopulation variants (polymorphism) from interpopulation ones (polytypy), as well as from individual karyotype forms (Hausser et al. 1994). Chromosome identification made it possible to describe the chromosomal races of S. araneus (Halkka et al. 1974, 1987). The results of karyological studies over the full species range were successively summarized first by Zima et al. (1996) and then by Wójcik et al. (2003). In Russia, G-banded chromosomes of the common shrew were first described for a Siberian (Novosibirsk) population by Král and Radjabli in 1974. The results of further studies of high-resolution G-banding and chromosome painting of race Novosibirsk represented the species in the international "Atlas of Mammalian Chromosomes" (2006) and in comprehensive comparative studies of Sorex (Biltueva et al.
2011). This race was also used for DAPI karyotyping of the common shrew (Minina et al. 2007).
Currently, no fewer than 72 chromosomal races are recognized in total (White et al. 2010). The number of Russian chromosomal races has already reached 25 (Orlov et al. 1996, Bulatova et al. 2000, Shchipanov et al. 2009, Pavlova 2010). Only four of these races are common to Russia and some neighboring areas: 1) the Neroosa race, which spreads over the southern regions of Russia and Ukraine; 2) the West Dvina race, found in the Russia-Belarus border regions; 3) the Goldap race, which inhabits the Baltic coast area of Poland and the Kaliningrad region of western Russia; 4) the Ilomantsi race, which occurs in the bordering areas of north-western Russia (Karelia) and Finland (Orlov et al. 1996). Owing to ISACC activity, research involving the detection of hybrid zones, as well as the discovery and description of chromosome races, continues on a regular basis. The first case of S. araneus interracial hybridization in Russia was presented by Aniskin and Lukianova (1989) for the Tomsk and Novosibirsk races in Western Siberia. This hybrid zone is characterized by a high number of chromosome arm combinations and remains one of the most complex and best studied S. araneus hybrid zones (Searle and Wójcik 1998, Polyakov et al. 2011). The hybrids here form a complex meiotic configuration, a long chain of 9 monobrachially homologous acrocentrics and metacentrics. Presumably, chromosome incompatibility proved by meiosis data may induce infertility in hybrids, which, in turn, could contribute to selection for assortative mating (Searle and Wójcik 1998). Given that racial karyotypes of S. araneus as a rule differ by 1-5 variable metacentrics, the hybrids should produce rings or chains of different numbers and lengths in meiosis. Thus, the simplest heterozygotes form a chain of three, CIII, or a ring of four, RIV.
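The chain and ring configurations described above follow mechanically from the two racial karyotypes: treating the variable arms as graph nodes and each metacentric as an edge, a connected component in which every arm is fused in both races closes into a ring, while any unfused (acrocentric) arm opens it into a chain. A simplified Python sketch (hypothetical function name, sex chromosomes and invariant metacentrics ignored):

```python
from collections import defaultdict

def meiotic_configurations(race_a, race_b):
    """Predict F1 hybrid meiotic multivalents from two racial karyotypes.

    Each race is a list of chromosomes over the variable arms: a pair
    like ("g", "m") for a metacentric, a one-element tuple like ("k",)
    for an acrocentric. Returns (kind, size) tuples for multivalents,
    e.g. ("chain", 3) for CIII or ("ring", 4) for RIV.
    """
    adj = defaultdict(list)                   # arm -> fusion partners
    arms = set()
    for race in (race_a, race_b):
        for chrom in race:
            arms.update(chrom)
            if len(chrom) == 2:
                x, y = chrom
                adj[x].append(y)
                adj[y].append(x)
    seen, configs = set(), []
    for start in sorted(arms):
        if start in seen:
            continue
        comp, stack = set(), [start]          # flood-fill one component
        while stack:
            a = stack.pop()
            if a not in comp:
                comp.add(a)
                stack.extend(adj[a])
        seen |= comp
        n_metacentrics = sum(len(adj[a]) for a in comp) // 2
        n_acrocentrics = sum(2 - len(adj[a]) for a in comp)
        size = n_metacentrics + n_acrocentrics
        if size >= 3:                         # bivalents are excluded
            kind = "ring" if n_acrocentrics == 0 else "chain"
            configs.append((kind, size))
    return sorted(configs)
```

A metacentric shared by both races pairs as an ordinary bivalent (size 2) and is not reported, matching the expectation that only the variable metacentrics generate multivalents.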
The most complex heterozygote was registered in hybrids of the Moscow and Seliger races in European Russia, and represents a chain of eleven, CXI (Bulatova et al. 2007). As meiotic complications may lead to reduced hybrid reproductive fitness, this incompatibility is to be considered the first stage of reproductive isolation. There are indications that the Robertsonian rearrangements do not interrupt the existing gene flow in hybrid zones and could not promote speciation in S. araneus. Instead, races might be merely remnants of past allopatric differentiation followed by the loss of secondary contact (Horn et al. 2012, Polly et al. 2013), presenting in particular an astonishing racial 'patchwork'.
As has been shown in a variety of recent studies, the number and diversity of the chromosome rearrangements, along with the relative variety of hybrid zone types, represent a great opportunity for understanding the after-effects and possible connections of chromosome mutations with morphological, ecological and genetic differentiation in wild populations of common shrews (see Bibliographic list). It seems quite appropriate to recall the forecast made by the British cytogeneticists CE Ford and JL Hamerton in 1970 (p. 235): "… shrews displayed multiple patterns of chromosome variation predicting the problems essential for the interpretation of species evolution. Information about hybrid meiosis would be of outstanding value and studies of pregnant females and their embryos from polymorphic populations could give important information about the breeding system and relative fertility. At a more modest level there remain many parts of Europe from which simple identification of the karyotype in samples from the local population could at least help to fill in the still rather fragmentary distribution map of Races A and B and might reveal further unsuspected chromosome variation". Until now only the second part of this task has been mostly accomplished, while our knowledge of the influence of chromosome rearrangements on cells, specimens and species is still too fragmentary.
The first tribute to the bibliography on the S. araneus cytogenetic model was paid by Prof. Jan Zima at the 8th ISACC meeting (2008). To support his idea, we compiled a bibliographical list which includes the majority, if not all, of the currently available papers devoted to interracial hybrid zones of S. araneus in Russia. The Bibliographic list presented here includes 43 full papers published in national and international scientific editions within the last 40 years. As shown by the published data, hybrid karyotypes and true hybrid zones have been reported for at least 14 of the 25 chromosome races (indexed below) of the common shrew that inhabit Russia. This index includes the names of the races and their standard abbreviations, karyotypic diagnoses, and F1 hybrid meiotic formulae, followed by the reference numbers of the relevant papers from our Bibliographic list.
|
v3-fos-license
|
2021-05-05T00:08:28.218Z
|
2021-03-23T00:00:00.000
|
233670753
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.researchsquare.com/article/rs-299265/v1.pdf?c=1631894365000",
"pdf_hash": "18a53915239cdce9c2d72bef60381f28b62132ed",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:765",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"sha1": "2dee1afde5953bf3ae95a7ae2d5e50839e855e39",
"year": 2021
}
|
pes2o/s2orc
|
Synthesis, Characterization and Thermal Behavior of HYP2O7·3H2O, and Electrical Properties of HYP2O7
The diphosphate HYP 2 O 7 ·3H 2 O was synthesized via a soft chemistry route by evaporation of an aqueous solution at room temperature. The obtained compound was characterized by means of X-ray diffraction (XRD) and infrared spectroscopy (IR). The results showed a high-purity phase. The IR spectrum of this diphosphate revealed the usual signals related to the P 2 O 7 diphosphate group and water molecules. The thermal decomposition of the synthesized product followed by DTA/TG proceeded through four stages, leading to the formation of Y 2 P 4 O 13 as the final product. On the other hand, its decomposition by CRTA took place in three stages, leading to the formation of the anhydrous diphosphate HYP 2 O 7 as the final product. X-ray powder diffraction and infrared spectroscopy were used to identify these materials. Furthermore, the electrical properties of HYP 2 O 7 were investigated through complex impedance analysis. Modest conductivity was observed in this material in a relatively intermediate temperature range. Activation energies of 0.67 and 1.44 eV were deduced from the corresponding Arrhenius plot. The optical band gap of the title compound was calculated and found to be 2.71 eV.
Introduction
Many scientific disciplines are now concerned with phosphor compounds and/or ionic conductors.
Several types of host materials for rare earths are studied: oxides, sulphates and phosphates. Their applications are many and varied.
In this context, rare earth phosphates have been the subject of much research aimed at investigating their electrical and optical properties. In the lighting field, fluorescent lamps have been manufactured using lanthanum phosphate doped with Ce 3+ and Tb 3+ ions (LaPO 4 : Ce 3+ , Tb 3+ ) [1]. The compound CsPrP 4 O 12 was used in scintillators [2]. Glassy phosphates are also used as laser materials, such as NaPO 3 and Al(PO 3 ) 3 doped with Nd 3+ ions [3,4] and Y(PO 3 ) 3 doped with Yb 3+ ions [5]. They are also used in medicine, as optical tracers or in the treatment of cancer by targeted molecules [6].
The present work is part of the search for new multifunctional rare earth phosphates with electrical and optical properties of interest for industrial applications.
In this paper, we describe the synthesis, characterization, spectroscopic properties and thermal decomposition of HYP 2 O 7 ·3H 2 O. The anhydrous product HYP 2 O 7 was also prepared and investigated by complex impedance analysis.
X-ray powder diffraction The powder X-ray diffraction pattern was recorded, at room temperature, in the 2θ range of 10-60° with a Panalytical X'Pert PRO MPD diffractometer.
Spectroscopic techniques
The functional group vibrations were examined through Fourier transform infrared spectral analysis at room temperature in the range 400-4000 cm −1 using a NICOLET IR 200 FT-IR infrared spectrometer. The optical absorption was studied at room temperature with a Perkin Elmer Lambda 11 UV/Vis spectrophotometer in the range of 200-400 nm. The experiments by controlled rate thermal analysis (CRTA) were carried out with 50 mg samples weighed into a fused silica cell which was placed into a refrigerated furnace constructed in house and operating in the −30 to 600 °C temperature range. Once the equilibrium temperature was reached, the pressure above the sample was lowered from 1 bar to 5·10 −3 mbar using a vacuum pumping system. During the CRTA experiment, where the decomposition led to the production of vapor, the vapor pressure was measured by a Pirani gauge placed in proximity to the sample. The pressure signal produced by the Pirani gauge was sent to the furnace heating controller. The sample was then heated in such a way as to keep the vapor pressure generated by the sample constant at a preset value. The use of a diaphragm, placed between the Pirani gauge and the vacuum system, permitted an increase of the residual pressure (5 mbar) above the sample without changing the rate of vapor elimination.
Impedance Spectroscopy
Electrical conductivity measurements were performed with a Hewlett-Packard 4192 A impedance analyzer.
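Conductivities extracted from such impedance spectra are commonly analyzed with an Arrhenius model, σT = A·exp(−Ea/kBT), from which the activation energies quoted in the abstract (0.67 and 1.44 eV) can be deduced. A generic fitting sketch, not the authors' exact procedure, with the prefactor in the test being synthetic:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy_ev(temps_k, sigmas_s_per_cm):
    """Activation energy (eV) from conductivity vs. temperature data.

    Assumes Arrhenius behavior sigma*T = A*exp(-Ea/(kB*T)); a linear
    least-squares fit of ln(sigma*T) against 1/T has slope -Ea/kB.
    """
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(s * t) for t, s in zip(temps_k, sigmas_s_per_cm)]
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    return -slope * K_B_EV
```

Two distinct slopes in the Arrhenius plot, as reported here, indicate a change of conduction mechanism between the low- and high-temperature regimes.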
Results and Discussion
X-ray Diffraction
The X-ray patterns of the obtained crystals are identical to those of the acidic gadolinium diphosphate trihydrate HGdP 2 O 7 ·3H 2 O type II [7][8][9]. The prepared compound is therefore identified as HYP 2 O 7 ·3H 2 O, isostructural with the latter compound.
Its cell parameters were calculated on the basis of its powder diffractogram, starting from the cell parameters of HGdP 2 O 7 ·3H 2 O. The indexed diffractogram and the cell parameters obtained are given in Table 1.
IR Absorption Spectroscopy
The infrared spectrum of the diphosphate HYP 2 O 7 ·3H 2 O is shown in Fig. 1. In the TG curve the weight loss can be divided into four regions: 27-91 °C, 91-204 °C, 204-485 °C, and 485-796 °C. The TG weight loss in the first stage (6.6%) would correspond to the removal of two water molecules (%th = 11.32%). It is related to the endothermic peak at 79 °C. The second stage occurs between 91 and 204 °C and is accompanied by an endothermic peak at 112 °C. The corresponding water loss of 9.9% is close to the theoretical value calculated for the loss of two crystallization water molecules. The third and fourth stages would correspond to the departure of 0.5 water molecule per formula unit (%exp = 2.2%, %th = 2.83%). They are accompanied by a large thermal effect.
Thus, the total weight loss in the 27-796 °C temperature range (19.9%) would correspond to the loss of the three crystallization water molecules and of half of the constitution water molecule, in agreement with the calculated value of 19.81%.
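The theoretical weight-loss percentages quoted in the TG discussion can be checked directly from the molar masses. A short sketch using approximate atomic masses (the function name is illustrative):

```python
# Approximate atomic masses (g/mol), used to verify the theoretical
# TG weight-loss percentages quoted in the text.
M = {"H": 1.008, "Y": 88.906, "P": 30.974, "O": 15.999}
M_H2O = 2 * M["H"] + M["O"]                                    # ~18.015
M_SALT = M["H"] + M["Y"] + 2 * M["P"] + 7 * M["O"] + 3 * M_H2O  # HYP2O7.3H2O

def water_loss_pct(n_water):
    """Theoretical TG weight loss (%) for losing n_water H2O per formula unit."""
    return 100.0 * n_water * M_H2O / M_SALT
```

The function reproduces the paper's values: about 11.3% for two water molecules, 2.83% for half a molecule, and 19.8% for the total loss of 3.5 molecules.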
The product obtained at the end of the thermolysis has a complex IR spectrum (Fig. 3). It notably shows a wide band between 960 and 1320 cm −1 and two bands of low intensity between 800 and 750 cm −1 . The analysis of its powder X-ray diffractogram (Fig. 4) shows that it is a well crystallized product.
We can therefore say that the thermal effect appearing between 685 and 900 °C is an exothermic one peaking at 802 °C and corresponding to the crystallization of the decomposition product. According to the decomposition equation, the final product would have the stoichiometry (Y 2 O 3 , 2P 2 O 5 ):
2 [HYP 2 O 7 ·3H 2 O] (s) → 7 H 2 O (g) + (Y 2 O 3 , 2P 2 O 5 ) (s)
Comparison of the obtained patterns with those found in the literature showed that they have no correspondence in the database. The decomposition product may therefore correspond to a new salt of formula Y 2 P 4 O 13 . It should be noted that in a previous work [10] we studied the thermal decomposition of HGdP 2 O 7 ·2H 2 O·NH 3 and reported the formation of a gadolinium tetraphosphate Gd 2 P 4 O 13 identified by X-ray diffraction (JCPDS file 00-035-0078). This salt has so far been reported as an equilibrium phase in the gadolinium phosphate system Gd 2 O 3 -P 2 O 5 and shown to be a defined compound with congruent fusion [11]. It seems that the gadolinium tetraphosphate and the newly obtained one are not isostructural.
To better specify the influence of water vapor on the decomposition stages of yttrium acid diphosphate trihydrate, we undertook this study using thermal analysis at controlled transformation rate (CRTA). The IR spectrum and the X-ray patterns of the CRTA residue are shown, respectively, in Figs. 6 and 7.
The IR spectrum of the obtained product (Fig. 6) shows the persistence of the characteristic bands of the diphosphate anion and a decrease in the intensity of the O-H band.
The corresponding X-ray patterns (Fig. 7) are found to be identical to those reported for the anhydrous diphosphate HGdP 2 O 7 [13].
So, we can conclude that the vapor phase existing under the sample during its decomposition consists only of water vapor. This allows us to affirm that the sample decomposition took place at a constant rate. Under this condition, the weight loss at each step is proportional to the time. Thus, the first and second steps would be related to the removal of two and a half water molecules, respectively.
The IR spectra of the products isolated at 98 and 330 °C (Fig. 9) show both the characteristic bands of the diphosphate group and those of the crystallization water. The corresponding X-ray patterns (Fig. 8) show the formation of well crystallized diphosphates. According to the CRTA results, these diphosphates would be the monohydrate and the hemihydrate, respectively. It seems that their structures are similar to that of the anhydrous salt, considering the similarity between the corresponding X-ray patterns and those of the anhydrous product.
The comparison of our results with those found for HGdP 2 O 7 ·3H 2 O [12] shows that the decomposition schemes of the two salts are different in spite of their isotypy. Indeed, it was found that the first decomposition step in HGdP 2 O 7 ·3H 2 O by CRTA under 5 mbar water vapor corresponds to the removal of only one water molecule. This first water molecule left the salt without changing its structural arrangement because it was loosely bound [7]. A dihydrate isostructural with the initial trihydrate salt was then obtained [13]. This difference between the two CRTA results shows that the water molecules of crystallization are bound differently in the two diphosphate crystal lattices, despite being isostructural.
Optical study
The UV-Vis electronic spectra of the studied compound HYP 2 O 7 ·3H 2 O are reported in Figs. 10a-b. The absorption spectrum (Fig. 10a) shows three absorption bands: an intense one observed at 290 nm and two broader ones with maxima located at 470 and 531 nm, respectively.
Consequently, the bands observed in the absorption spectrum of this compound suggest an energy transfer between Y-Y or Y-O pairs.
The band gap between the HOMO (electron-donating) and LUMO (electron-accepting) orbitals was determined using the Tauc method [14] (Fig. 10b). The value obtained is 2.71 eV. The compound thus behaves as a wide-band-gap semiconductor, suggesting applications in optoelectronics.
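The Tauc extrapolation can be illustrated with a short sketch. The synthetic absorption data, the assumption of a direct allowed transition, and all variable names here are ours; only the 2.71 eV gap is taken from the text, and real (αhν)² values would come from the measured absorbance:

```python
import numpy as np

# Tauc method for a direct allowed transition: (alpha*h*nu)^2 is linear
# in h*nu above the gap, and its extrapolation to zero gives Eg.
Eg_true = 2.71                     # eV, gap reported from the Tauc plot
E = np.linspace(2.8, 3.6, 50)      # photon energies above the gap (eV)
tauc = 5.0 * (E - Eg_true)         # synthetic (alpha*h*nu)^2, arbitrary units

# Linear fit of the rising edge, then extrapolate to (alpha*h*nu)^2 = 0:
# the x-intercept -intercept/slope is the optical band gap.
slope, intercept = np.polyfit(E, tauc, 1)
Eg_est = -intercept / slope
print(round(Eg_est, 2))            # recovers 2.71
```

In practice one fits only the linear portion of the rising edge; curvature near the absorption onset would otherwise bias the intercept.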
Electrical properties
The electrical properties of HYP 2 O 7 were investigated through impedance complex analysis.
Fig. 11 shows some Nyquist plots of the anhydrous compound HYP 2 O 7 at different temperatures. The Arrhenius diagram is illustrated in Fig. 12. It consists of two linear segments with a meeting point located at Tr = 874 K. Such a break is generally due to a crystal-structure transition. However, an X-ray diffraction study carried out on a sample calcined at a temperature slightly above the breaking-point temperature (Tr) shows that no crystal-structure transition has taken place. Furthermore, no thermal event was observed on the differential thermal analysis curve of HYP 2 O 7 between 773 and 923 K. The activation energy was determined in the two intervals. We give in Table II the average activation energy value for T < Tr and T > Tr as well as the conductance value of this anhydrous phosphate.
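The two-regime Arrhenius analysis can be sketched as follows. The conductivity data and the two activation energies below are invented for illustration (the measured values are those of Table II); only the break temperature Tr = 874 K comes from the text:

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant, eV/K

# Synthetic Arrhenius data with a slope break at Tr = 874 K.
# Ea_low and Ea_high are assumed values, not the measured ones.
Tr = 874.0
Ea_low, Ea_high = 0.80, 1.20        # eV, one per regime (hypothetical)
T = np.linspace(700.0, 1000.0, 60)  # K
lnsig = np.where(T < Tr,
                 -Ea_low / (kB * T),
                 -Ea_high / (kB * T) + (Ea_high - Ea_low) / (kB * Tr))
# The constant offset keeps the two branches continuous at Tr.

def fit_Ea(T_seg, ln_seg):
    """Slope of ln(sigma) vs 1/T is -Ea/kB; return Ea in eV."""
    slope, _ = np.polyfit(1.0 / T_seg, ln_seg, 1)
    return -slope * kB

lo, hi = T < Tr, T >= Tr
print(round(fit_Ea(T[lo], lnsig[lo]), 2))  # recovers 0.8
print(round(fit_Ea(T[hi], lnsig[hi]), 2))  # recovers 1.2
```

Fitting each temperature interval separately, as done for Table II, yields one activation energy per regime.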
|
v3-fos-license
|
2018-04-03T00:39:29.889Z
|
2016-03-28T00:00:00.000
|
13522831
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://zookeys.pensoft.net/article/7767/download/pdf/",
"pdf_hash": "b19d8df1820f08f307be8f8ba59acb5d70c8cf9d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:766",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "b19d8df1820f08f307be8f8ba59acb5d70c8cf9d",
"year": 2016
}
|
pes2o/s2orc
|
A new species of Hypoaspis Canestrini (Acari, Mesostigmata, Laelapidae) associated with Oryctes sp. (Coleoptera, Scarabaeidae) in Iran
Abstract A new species of the genus Hypoaspis Canestrini, Hypoaspis surenai sp. n., is described based on adult female specimens collected in association with Oryctes sp. (Coleoptera: Scarabaeidae) in Taft, Yazd province, Iran.
Introduction
The mite family Laelapidae includes approximately 800 species of morphologically, ecologically and behaviourally very diverse dermanyssoid mites, including obligate and facultative parasites of vertebrates, insect paraphages, and free-living predators that inhabit soil-litter habitats and the nests of vertebrates and arthropods (Evans and Till 1966; Faraji and Halliday 2009; Lindquist et al. 2009; Joharchi et al. 2012a, b). Currently, the family is classified into approximately 144 genera, including Hypoaspis with 36 species. Hypoaspis sensu stricto has been treated as a separate genus equivalent to Hypoaspis (Hypoaspis) of other authors (e.g., Evans and Till 1966; Karg 1979, 1982, 1993), with a diagnosis and comparison of diagnostic characters for the closely related genus Coleolaelaps Berlese. That concept of Hypoaspis s.s. is followed here. The most recent taxonomic work on the genus was by Joharchi et al. (2014), who clarified the diagnosis of the genus and reviewed species that occur in the Western Palaearctic Region. In Iran, Hypoaspis s.s. included 14 identified species prior to this study (Razavi Susan et al. 2014; Joharchi et al. 2014).
The ecological role of this genus is unknown. They may feed on exudates from the beetle's body or their eggs, or on other small invertebrates in the microhabitats created by the beetles (Costa 1971;Joharchi et al. 2014). This has not been established experimentally, and it will be necessary to do feeding experiments to establish the true ecological role of these mites. The purpose of this paper is to describe another species of Hypoaspis s.s. to increase our knowledge of the Iranian fauna of Laelapidae.
Materials and methods
Phoretic laelapids on beetles were collected from Taft, Yazd province, Iran, in 2015. Mites were removed from the beetles using an entomological pin. Specimens were cleared in Nesbitt's solution and mounted in Hoyer's medium. The line drawings and examination of the specimens were performed with an Olympus BX51 phase contrast microscope equipped with a drawing tube, and figures were elaborated with Corel X-draw software, based on the scanned line drawings. Dorsal shield length and width were taken from the anterior to posterior margins along the midline, and at its broadest point, respectively. Length and width of the sternal shield were measured from the anterior border to the posterior margin at the full length and broadest point, respectively. Genital shield length and width were measured along the midline from the anterior border of the genital shield to the posterior margin of the genital shield, and at the maximum, respectively. Leg lengths were measured from base of the coxa to the apex of the tarsus, excluding the pre-tarsus. The nomenclature used for the dorsal idiosomal chaetotaxy is that of Lindquist and Evans (1965), the leg chaetotaxy is that of Evans (1963a), the palp chaetotaxy is that of Evans (1963b), and names of other anatomical structures mostly follow Evans and Till (1979). We use the terms "lyrifissures" to refer to slit-shaped sensilli, "gland pores" to refer to structures that we believe are the openings of secretory pores, and "poroids" for circular or oval ones. Paratypes (ARS-20150304-1k, ARS-20150304-1l) are also deposited in the Australian National Insect Collection, CSIRO, Canberra, Australia (ANIC). All measurements in the descriptions are given in micrometres (μm).
The short diagnosis below is summarised from the detailed diagnosis in Joharchi and Halliday (2011).
Short diagnosis. Dorsal shield oval, without lateral incisions, bearing 35-40 pairs of setae including one or more pairs of Zx setae; some opisthonotal setae greatly elongated, especially Z4 (at least three times as long as J4); post-anal seta distinctly shorter than para-anals; hypostomal setae h3 distinctly longer than other hypostomal setae; tarsus II with two subterminal blunt spines (setae al1 and pl1).
Males and immatures. Unknown. Etymology. The species is named in memory of Surena (died 53 BC), a Parthian spahbed ("General" or "Commander") of the 1st century BC.
Remarks. According to the key to species of Hypoaspis s.s. occurring in the Western Palaearctic Region provided by Joharchi et al. (2014), Hypoaspis surenai most resembles H. pentodoni Costa, 1971 but has the following unique character states for the genus: 21 pairs of long smooth, pointed setae on the podonotal shield, including a supernumerary pair near s6 (x) and r2, r3, r6 off the shield; 16 pairs of smooth and long setae on the opisthonotal shield including two pairs of Zx setae between the J and Z setae, seta Z3 absent; three long macrosetae on tarsus IV (ad2, pd2 and pd3); one macroseta on each of femora II-IV and seta ad1 on genu IV being only slightly longer than the remaining setae on the segment.
Almost all of the species of Hypoaspis s.s. occurring in Iran are associated with Coleoptera, especially with a wide variety of species in the family Scarabaeidae, while a few have been collected in soil. Most of these species have been collected on only a few occasions, so it is difficult to draw any firm conclusions about their host specificity. The question of host or microhabitat specificity of the species cannot be analysed in detail until all of the available collections are re-examined to confirm the identifications.
|
v3-fos-license
|
2016-03-14T22:51:50.573Z
|
2010-11-24T00:00:00.000
|
14154791
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/286/5/3805.full.pdf",
"pdf_hash": "5d8acdeb778f21c1eadc1c335f3feb17b6f56522",
"pdf_src": "Highwire",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:767",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"sha1": "ff73c0d485690565efd9d40798ebf916734856ab",
"year": 2010
}
|
pes2o/s2orc
|
Arrestin-2 Differentially Regulates PAR4 and ADP Receptor Signaling in Platelets*
Arrestins can facilitate desensitization or signaling by G protein-coupled receptors (GPCR) in many cells, but their roles in platelets remain uncharacterized. Because of recent reports that arrestins can serve as scaffolds to recruit phosphatidylinositol 3-kinases (PI3Ks) to GPCRs, we sought to determine whether arrestins regulate PI3K-dependent Akt signaling in platelets, with consequences for thrombosis. Co-immunoprecipitation experiments demonstrate that arrestin-2 associates with p85 PI3Kα/β subunits in thrombin-stimulated platelets, but not resting cells. The association is inhibited by inhibitors of P2Y12 and Src family kinases (SFKs). The function of arrestin-2 in platelets is agonist-specific, as PAR4-dependent Akt phosphorylation and fibrinogen binding were reduced in arrestin-2 knock-out platelets compared with WT controls, but ADP-stimulated signaling to Akt and fibrinogen binding were unaffected. ADP receptors regulate arrestin recruitment to PAR4, because co-immunoprecipitates of arrestin-2 with PAR4 are disrupted by inhibitors of P2Y1 or P2Y12. P2Y1 may regulate arrestin-2 recruitment to PAR4 through protein kinase C (PKC) activation, whereas P2Y12 directly interacts with PAR4 and therefore, may help to recruit arrestin-2 to PAR4. Finally, arrestin-2−/− mice are less sensitive to ferric chloride-induced thrombosis than WT mice, suggesting that arrestin-2 can regulate thrombus formation in vivo. In conclusion, arrestin-2 regulates PAR4-dependent signaling pathways, but not responses to ADP alone, and contributes to thrombus formation in vivo.
Ser-Thr kinase, Akt (4,5). In fibroblasts, colorectal, and gastric carcinoma cells, arrestins have been found to play a critical role in localizing PI3K to GPCR complexes through an interaction with Src family kinases (SFKs) (6 -8). Perhaps most relevant for platelet agonists, thrombin-stimulated Akt phosphorylation involved activation of both G i and G q : G i -dependent signaling to Akt required ras activation, while G q -dependent Akt activation required arrestin-2 (9).
Previous work from our laboratory and others has demonstrated that Akt-dependent pathways contribute to platelet activation by G protein-coupled receptors (10, 11). Yet, the mechanisms leading to Akt activation in platelets remain incompletely defined. Multiple laboratories have demonstrated that thrombin-dependent Akt phosphorylation in platelets is reduced by about 90% in the presence of inhibitors for the G i -coupled ADP receptor, P2Y12, and is blocked by inhibitors of PKC (12, 13). These data have been interpreted to mean that Akt activation by thrombin is wholly dependent on the PKC-stimulated release of ADP. Yet, the amount of Akt phosphorylation induced by ADP reaches only a fraction of the magnitude of that induced by thrombin. In other words, P2Y12 activation is necessary, but not sufficient, to achieve maximal Akt stimulation by thrombin or PAR4 agonist. Studies to evaluate the contribution of specific G protein α-subunits to thrombin- versus ADP-dependent signaling in mouse platelets provided data consistent with this view: specifically, while G q was required for Akt phosphorylation induced by thrombin or ADP, G i2 was required solely for ADP signaling (10). These results suggested that a secondary role of PAR4 activation was required that was not induced by ADP alone. Furthermore, a recent study shows that PAR4 is capable of stimulating Akt phosphorylation in P2Y12 knock-out platelets (14). Taken together, these results suggest that the mechanisms of Akt activation induced by thrombin receptors versus P2Y12 are different, but synergistic.
Because studies in fibroblasts suggest that Akt phosphorylation depends in part on the ability of arrestin-2 to form complexes with PI3Ks (9), we evaluated the formation of arrestin-2-PI3K complexes in thrombin-stimulated human platelets. Results from immunoprecipitation experiments suggest that arrestin-2 facilitates the recruitment of signaling complexes containing PI3K subunits and the SFK Lyn to the PAR4 receptor for thrombin. To determine whether arrestin-2 is important for Akt activation, Akt phosphorylation induced by PAR4 agonists or ADP was assessed in arrestin-2 knock-out (−/−) versus wild type (WT) mouse platelets. The functional responses of platelets from arrestin-2−/− mice were also tested in vitro. The results show that Akt phosphorylation stimulated by PAR4 agonist is arrestin-2-dependent, whereas ADP-dependent Akt phosphorylation is not. Fibrinogen binding induced by PAR4 agonists is also arrestin-dependent, while ADP-induced fibrinogen binding is not. The role of arrestin-2 in supporting platelet signaling by PAR4 appears to contribute to platelet function in vivo, because arrestin-2 knock-out mice have a mild defect in thrombus formation following carotid artery injury in vivo.
Animals-Arrestin-2 knock-out (−/−) mice were generated as described (15) and kindly provided by the laboratory of Dr. Robert Lefkowitz. All animal procedures were approved by the Institutional Animal Care and Use Committee at Thomas Jefferson University.
Platelet Isolation and Preparation of Human Blood-Blood for biochemical studies of human platelets was collected by venipuncture from adult human volunteers after providing written informed consent as approved by the Institutional Review Board at Thomas Jefferson University. Blood was collected into a 60-cc syringe containing ACD (trisodium citrate, 65 mM; citric acid, 70 mM; dextrose, 100 mM; pH 4.4) at a ratio of 1:6 parts ACD/blood. Anticoagulated blood was spun by centrifugation at 250 × g to remove red cells. Platelets from the resulting platelet-rich plasma (PRP) were pelleted at 750 × g (10 min), washed once in HEN buffer (10 mM HEPES, pH 6.5, 1 mM EDTA, 150 mM NaCl) containing 0.05 units/ml apyrase, and resuspended at 4-10 × 10 8 platelets/ml in HEPES-Tyrode's buffer (137 mM NaCl, 20 mM HEPES, 5.6 mM glucose, 1 g/liter BSA, 1 mM MgCl 2 , 2.7 mM KCl, 3.3 mM NaH 2 PO 4 ) containing 0.05 units/ml apyrase, for immunoblotting, immunoprecipitation, or fibrinogen binding.
Platelet Isolation from Mice-Blood was isolated from the inferior vena cava of anesthetized mice (100 mg/kg pentobarbital) using a syringe containing 150 units/ml heparin (1:9 dilution with blood), diluted 50% with HEPES-Tyrode's buffer, and spun at 250 × g for 4 min to remove red cells. Generally, blood from two mice of each genotype was used for experiments. Platelets from the resulting platelet-rich plasma (PRP) were pelleted at 750 × g (10 min), washed once in HEN buffer, and resuspended with HEPES-Tyrode's buffer. Platelets were counted on a Coulter counter (Beckman-Coulter Z1) and the final platelet count adjusted with Tyrode's buffer.
Immunoblotting-Samples (4 × 10 8 platelets/ml) were treated with antagonist for 10 min at room temperature. Agonist was added in a 2 μl volume to 100 μl platelets per sample; platelets were incubated for 0-10 min at 37°C and were lysed by addition of 5× Laemmli buffer containing a mixture of protease inhibitors (Sigma-Aldrich). Lysates were resolved on 10% SDS-PAGE and immunoblotted with an antibody to Akt phospho-Ser-473 (Cell Signaling Technology, Beverly, MA), arrestin-2, or arrestin-3 (Santa Cruz Biotechnology) at a 1:1000 dilution, then anti-rabbit AlexaFluor680 (LiCor) or anti-goat AlexaFluor680 (LiCor) in blotting buffer (LiCor) in TBS and exposed on a LiCor fluorescence imager.
Immunoprecipitation (IP)-Samples (8-10 × 10 8 platelets/ml) were treated with antagonist for 10 min at room temperature. Agonist was added in a 5 μl volume to 500 μl platelets per sample; platelets were incubated for 0-10 min at 37°C and were lysed by addition of 2× IP buffer (1% Nonidet P-40, 150 mM NaCl, 10 mM Tris, 1 mM Na 3 VO 4 , 5 mM EDTA, 0.5 mM PMSF, pH 7.4) containing a mixture of protease inhibitors (Sigma-Aldrich); rotated at 4°C for 30 min and spun 30 min at 12,000 × g. Antibodies or control IgG were added to lysates (2 μg per sample) and rotated at 4°C for 3 h or overnight, followed by protein A/G-agarose (15 μl/ml) at 4°C for 2 h. Samples were washed with 1× IP buffer three times and applied in Laemmli buffer to 10% SDS-PAGE for immunoblotting.
Megakaryocyte Differentiation-Megakaryocytes were differentiated from mouse embryonic stem (ES) cells in culture, essentially as described by Eto et al. (16). Mouse ES cells were seeded onto confluent OP9 cells and cultured in MEM medium supplemented with 20% fetal bovine serum (FBS). In 5 days, the ES cells were differentiated into hematopoietic progenitors without formation of embryoid bodies. For differentiation into megakaryocytes, the cells were trypsinized on day 5 and passed over fresh mitomycin C-treated OP9 cells in the same culture medium containing 20 ng/ml TPO. Then, on day 8, cells were seeded on a fresh OP9 feeder layer in the same culture medium containing 10 ng/ml TPO, 10 ng/ml IL-6, and 10 ng/ml IL-11 for harvest at day 12. Differentiation was evaluated by immunostaining, Wright-Giemsa staining, and flow cytometry.
Immunofluorescence-Mouse ES cells were grown on Fluorodishes as described above. Cells were fixed with 4% paraformaldehyde and washed with PBS, incubated with primary antibody for 1-2 h, and incubated with rhodamine- or fluorescein-conjugated secondary antibodies for 30-60 min. Stained cells were observed on an Olympus confocal microscope (40×).
FeCl3-induced Carotid Artery Thrombosis-The right carotid artery of an anesthetized adult mouse (6-10 weeks of age, 18-30 g, treated with 100 mg/kg pentobarbital) was exposed to a strip of filter paper saturated with either 10% FeCl3 for 2 min 15 s or 5% FeCl3 for 3 min, then rinsed with PBS, essentially as described (10). Arterial flow rate was recorded for 30 min with a Doppler flow probe. Stable occlusive thrombi were scored as complete cessation of blood flow which remained for the 30 min duration of the assay. Thrombi were scored as unstable if flow resumed before the end of the 30 min time period or decreased by at least 80% from the initial flow rate but remained incomplete. The animal was scored as having no occlusive thrombus if the flow rate never decreased by 80% of the initial flow rate during the term of the assay. The mice were sacrificed at the end of the procedure. Statistical significance was calculated using Fisher's test of exact probability.
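The scoring rules above can be expressed as a small decision function. The function name, threshold parameter, and example flow traces are hypothetical; only the classification criteria (sustained complete cessation = stable; an 80% drop that recovers or stays incomplete = unstable; otherwise none) come from the text:

```python
# Sketch of the thrombus-scoring rules applied to a recorded Doppler
# flow trace (flow-rate readings sampled over the 30-min assay).
def score_thrombus(flow, drop_fraction=0.80):
    """Classify a carotid flow trace as 'stable', 'unstable', or 'none'.

    flow: sequence of flow-rate readings; flow[0] is the initial rate.
    """
    initial = flow[0]
    threshold = initial * (1.0 - drop_fraction)
    if 0 in flow and flow[-1] == 0:
        # Complete cessation maintained through the end of the assay.
        first_zero = flow.index(0)
        if all(f == 0 for f in flow[first_zero:]):
            return "stable"
    if any(f <= threshold for f in flow):
        # Flow dropped by at least 80% but recovered or stayed incomplete.
        return "unstable"
    return "none"

print(score_thrombus([10, 8, 4, 0, 0, 0]))   # occluded and stays occluded
print(score_thrombus([10, 8, 1, 0, 3, 2]))   # occludes, then flow resumes
print(score_thrombus([10, 9, 9, 8, 9, 10]))  # never drops by 80%
```

Counts of animals in each category would then be compared between genotypes with Fisher's exact test, as described above.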
Arrestin-2 Forms Agonist-dependent Complexes with PI3K
and Lyn in Human Platelets-Given that the Ser-Thr kinases Akt1 and Akt2 have been shown to play important roles in platelet aggregation and thrombosis (10, 11), we sought to uncover additional signaling proteins that may regulate Akt activation in platelets and also play important roles in thrombus formation. We and others have previously shown that PAR4-dependent activation of Akt is dependent on activation of SFKs (13, 17). SFKs are incorporated into signaling complexes containing PI3K subunits and arrestins in other cells (6); therefore, we reasoned that arrestins may contribute to Akt activation in platelets. We show in Fig. 1A that arrestin-2 is present in platelets isolated from mice and humans and that immunodetection of arrestin-2 expression is lost in platelets genetically deleted for arrestin-2 (arrestin-2−/−). To determine whether thrombin stimulates the association of arrestin-2 with the p85 subunit of PI3Kα/β, platelets were stimulated with thrombin, lysed, and immunoprecipitated with an antibody recognizing p85 PI3Ks α or β. Immunoprecipitates were then immunoblotted for arrestin-2. Formation of signaling complexes containing p85-PI3K and arrestin-2 was stimulated by thrombin and inhibited in the presence of the SFK inhibitor PP2, or ARL66096, an inhibitor of the P2Y12 receptor for ADP (Fig. 1B). Complex formation was also detected in thrombin-stimulated platelets immunoprecipitated with antibodies to arrestin-2 and immunoblotted for p85-PI3K, and blocked by apyrase, an enzyme which hydrolyzes ADP (Fig. 1C). PI3K-arrestin-2 complexes were detected in platelets stimulated with thrombin, PAR4 agonist peptide, and to a lesser extent, PAR1 agonist peptide (Fig. 1C). Given that thrombin-dependent PI3K-arrestin association is inhibited by SFK inhibitors, we also tested whether SFKs were incorporated into complexes with PI3K and arrestin-2. Fig. 1D shows that Lyn co-precipitates with arrestin-2 and PI3K upon thrombin stimulation and that thrombin-dependent association of Lyn with PI3K was inhibited by antagonists of P2Y12 and SFKs. Fyn and Src were not detected as part of the complexes (additional data not shown).
Deletion of Arrestin-2 Reduces Platelet Sensitivity to Thrombin, but Not ADP, Stimulation-The thrombin-stimulated association of PI3K with arrestin-2 suggests that arrestin-2 may regulate PI3K-dependent signaling events. Therefore, to determine whether arrestin-2 regulates Akt phosphorylation induced by thrombin receptor activation, Akt phosphorylation induced by the PAR4-activating peptide AYPGKF was evaluated in mouse platelets lacking arrestin-2 compared with WT control mice. The results show that arrestin-2−/− platelets have a reduced sensitivity (right shift in dose-response curve) to PAR4 peptide- or thrombin-mediated Akt phosphorylation relative to their WT counterparts (Fig. 2, A and B). To determine whether Akt phosphorylation might affect platelet function, fibrinogen binding was evaluated in arrestin-2−/− platelets. Arrestin-2−/− platelets also displayed reduced sensitivity to fibrinogen binding relative to wild-type control platelets (Fig. 2C), likely reflecting the reduced sensitivity to Akt phosphorylation, which has been shown to promote fibrinogen binding (10).
FIGURE 1. Arrestin-2 expression and complex formation in mouse and human platelets. A, 2 × 10 7 mouse or human platelets were loaded per lane and immunoblotted with antibody to arrestin-2. B, human platelets were left untreated or stimulated by thrombin (0.1 U/ml) for 10 min with or without ARL66096 (300 nM) or PP2 (50 μM), lysed, immunoprecipitated with antibodies to p85-PI3K (Upstate, Temecula, CA; 2 μg/ml) and immunoblotted with antibodies to arrestin-2 (Santa Cruz Biotechnology, 1:1000). C, platelets were stimulated for 10 min with ADP (10 μM), thrombin (0.1 units/ml), peptides AYPGKF (150 μM), or SFLLRN (5 μM), with or without apyrase (1 unit/ml); then lysed and immunoprecipitated with antibody to arrestin-2 and immunoblotted with anti-p85-PI3K. D, human platelets treated with ADP or thrombin as in C, with or without ARL66096 (300 nM), A3P5PS (300 μM), or PP2 (50 μM) were immunoprecipitated with antibody to p85-PI3K (2 μg/ml) and immunoblotted with antibodies to arrestin-2, Lyn kinase, or p85-PI3K. Each of the figures shown is representative of results from a minimum of three separate experiments.
Arrestin-2 Supports PAR-4 Signaling
FEBRUARY 4, 2011 • VOLUME 286 • NUMBER 5
FIGURE 2. Akt phosphorylation and fibrinogen binding in response to PAR4 agonist or thrombin in WT and arrestin-2−/− platelets. A, platelets (2 × 10 7 /lane) from WT or arrestin-2−/− mice were stimulated for 5 min at 37°C with the indicated concentration of AYPGKF, lysed, resolved by SDS-PAGE, and immunoblotted with phosphospecific antibody to p-Akt473 or total Akt. B, average ± S.E. of three or more experiments at each concentration as in A, quantified by densitometry, is shown. White bars are WT, black are arrestin-2−/−. * indicates a significant difference between arrestin-2−/− and WT platelets detected by 2-tailed, paired Student's t test, with p ≤ 0.05. C, platelets from WT or arrestin-2−/− mice (4 × 10 7 /ml) were stimulated with the indicated concentration of AYPGKF together with AlexaFluor488-conjugated fibrinogen, then fixed and analyzed by flow cytometry. Shown is the mean fluorescence intensity, averaged over three experiments ± S.E. * indicates significant difference between arrestin-2−/− and WT platelets detected by 2-tailed, paired Student's t test, with p ≤ 0.05.
PAR4-dependent Akt phosphorylation has been demonstrated to be largely dependent on the presence of ADP (12). Therefore, we considered that arrestin might be required for ADP signaling to Akt and influence PAR4 signaling as a secondary consequence. To our surprise, the concentration-response curves for Akt phosphorylation stimulated by ADP did not differ between arrestin-2−/− mice and WT control mice (Fig. 3, A and B). There was also no difference in the concentration-response curves for fibrinogen binding between arrestin-2−/− platelets and WT mice (Fig. 3C). Taken together, the results in Figs. 2 and 3 suggest that PAR4- and P2Y12-dependent signaling to Akt occur through different mechanisms: PAR4-dependent Akt phosphorylation is partially dependent on arrestin-2, whereas P2Y12 signaling to Akt is not.
PAR4 Co-localizes and Associates with Arrestin-2 and PI3K upon Thrombin Stimulation-Our co-immunoprecipitation studies show that thrombin stimulates association of arrestin-2 and PI3Ks (Fig. 1). Because arrestins commonly interact directly with G protein-coupled receptors, we wondered whether arrestin-2 was recruited to the PAR4 thrombin receptor upon its activation, in turn recruiting PI3K to the receptor. We first tested this hypothesis using immunofluorescence microscopy of megakaryocytes differentiated in culture from ES cells (see Ref. 16). Megakaryocytic cells were incubated in the presence or absence of thrombin, then permeabilized and immunostained with antibodies to PAR4, arrestin-2, or p85-PI3K. Although diffuse staining of PAR4 is seen in unstimulated cells, PAR4 co-localizes in discrete domains with arrestin-2 (Fig. 4A) and PI3K (Fig. 4B) upon thrombin stimulation. Little or no immunostaining of PAR4 or arrestin-2 was evident in the absence of Triton X-100 to permeabilize the cells (additional data not shown). The PAR4 antibody used for immunostaining was directed against amino acids 180-300, spanning transmembrane domains 4 through 6, including the 3rd and 4th intracellular loops of PAR4. These data suggest that PAR4, PI3K, and arrestin-2 are colocalizing within an endocytic compartment upon thrombin stimulation.
To verify that arrestin-2 is recruited to PAR4 in platelets, rather than solely in a megakaryocyte model system, we also evaluated their association using a co-immunoprecipitation approach from human platelets. Our co-immunoprecipitation studies show that thrombin-induced association of PI3K with arrestin-2 is dependent on ADP; therefore, we tested whether the association was dependent on the P2Y1 or P2Y12 ADP receptors. Human platelets were stimulated with PAR4 agonist peptide or thrombin in the presence or absence of P2Y1 or P2Y12 antagonists, then immunoprecipitated with antibody to arrestin-2 and immunoblotted for PAR4. Stimulation of platelets with either PAR4 agonist (upper blot) or thrombin (lower blot) induces association of arrestin-2 with PAR4. The association is blocked by two different antagonists for P2Y12 (MeSAMP or ARC69931MX) or P2Y1 (A3P5PS or MRS2179) (Fig. 4C). Association of arrestin-2 with PAR4 is also evident in thrombin-stimulated cells immunoprecipitated with PAR4 and immunoblotted for arrestin-2 (Fig. 4D).
Arrestin Recruitment to PAR4 Is Dependent upon P2Y1-stimulated PKC Activation-Whereas a role for P2Y12 in arrestin association with PAR4 is not unexpected given that both P2Y12 and arrestin are required for maximal Akt phosphorylation by thrombin, the requirement for P2Y1 in arrestin recruitment was unforeseen. P2Y1 is a G q -coupled receptor, activation of which stimulates phospholipase Cβ2, leading to protein kinase C (PKC) activation and release of calcium from the dense tubular system. To determine whether PKC was important for arrestin association with PAR4, PAR4-stimulated co-immunoprecipitation of PAR4 and arrestin-2 was tested in the presence of various PKC inhibitors. Akt phosphorylation was also tested under the same conditions. The broad-spectrum PKC inhibitor staurosporine blocks arrestin association with PAR4, as well as PAR4-dependent Akt phosphorylation (Fig. 5, A and B). Similarly, the broad-spectrum inhibitor Go6983, which inhibits both classical PKC isoforms (α, β, and γ) and atypical, non-Ca2+-dependent isoforms (δ and ζ), also decreased PAR4-arrestin association and Akt phosphorylation. In contrast, the PKC inhibitor Go6976, selective for classical isoforms PKCα and PKCβ, did not. These data suggest that arrestin recruitment to PAR4 is dependent upon the non-Ca2+-dependent, atypical class of PKCs.
FIGURE 3. Akt phosphorylation and fibrinogen binding in response to ADP in WT and arrestin-2−/− platelets. A, platelets from WT or arrestin-2−/− mice were stimulated with the indicated concentration of ADP and immunoblotted for p-Akt473 or total Akt as in Fig. 2. B, average ± S.E. of three experiments as in A, quantified by densitometry, is shown. C, platelets from WT or arrestin-2−/− mice were stimulated with the indicated concentration of ADP and analyzed for fibrinogen binding by flow cytometry as in Fig. 2. Shown is the mean fluorescence intensity, averaged over three experiments ± S.E.
However, incubation with PMA did not stimulate Akt phosphorylation, implying that PKC is required, but not sufficient, for PAR4-arrestin association and Akt phosphorylation. Interestingly, P2Y12 has been shown to enhance PKC phosphorylation through inhibition of DAG kinase (18). Therefore, P2Y12 and P2Y1 may both contribute to arrestin recruitment via PKC-dependent phosphorylation of PAR4.
Maximal PAR4-induced Akt Phosphorylation Requires both P2Y1 and P2Y12–The results shown in Fig. 4B implicate a role for P2Y12, in addition to P2Y1, in arrestin-2 recruitment to PAR4. To understand the relative roles of P2Y1 and P2Y12 in arrestin signaling to Akt, Akt phosphorylation was compared at 1, 3, and 5 min after PAR4 stimulation in the presence and absence of P2Y1 and P2Y12 inhibitors in WT and arrestin-2 knock-out mice. An average of three experiments using the P2Y12 inhibitor MeSAMP and the P2Y1 inhibitor A3P5PS is shown in Fig. 6A, while single representative experiments using the inhibitors MeSAMP and A3P5PS, or ARC69931MX (P2Y12 inhibitor) and MRS2179 (P2Y1 inhibitor), are shown in Fig. 6B.

FIGURE. Immunofluorescence localization of PAR4, arrestin-2, and p85-PI3K (A and B) and co-immunoprecipitation of PAR4 and arrestin-2 (C and D). A, mouse megakaryocytes were differentiated in culture, grown on Fluorodishes, and incubated in the presence (upper panels) versus absence (lower panels) of thrombin (2 units/ml) for 10 min at 37 °C. Cells were then incubated with FITC-conjugated antibody to PAR4 and rhodamine-conjugated antibody to p85-PI3K (A) or rhodamine-conjugated antibody to arrestin-2 (B), fixed, and the slides evaluated at 40× magnification on an Olympus confocal microscope. The two-color merge is shown in yellow. C, human platelets (4 × 10^8/lane) were treated with AYPGKF (150 μM) for 5 min at 37 °C with or without 2MeSAMP (100 μM), A3P5PS (300 μM), ARC69931MX (300 nM), or MRS2179 (100 μM), then immunoprecipitated with either IgG control or antibody to arrestin-2. Precipitates were immunoblotted with anti-PAR4 or anti-arrestin-2 antibodies. D, human platelets were incubated with or without thrombin (0.1 unit/ml) for 5 min at 37 °C, then immunoprecipitated with antibody to PAR4 and immunoblotted with antibodies to arrestin-2 or PAR4.

Consistent with the unforeseen role of P2Y1 in arrestin recruitment to PAR4, a role for P2Y1 in Akt phosphorylation is evident at both 3 and 5 min, as Akt phosphorylation is inhibited by A3P5PS or MRS2179 at these time points (p < 0.05 at 3 min, p < 0.001 at 5 min, ANOVA with Bonferroni post-test analysis). In arrestin-2 knock-out mice, the degree of Akt phosphorylation at 3 or 5 min is comparable to that of WT platelets treated with A3P5PS. In addition, no inhibition of Akt phosphorylation by A3P5PS or MRS2179 was seen in arrestin-2−/− platelets at these time points, suggesting that the role of P2Y1 in Akt phosphorylation is mediated by arrestin-2 (the difference is not significant by Bonferroni post-test). In contrast, P2Y12 appears to play some arrestin-independent role in Akt phosphorylation, since inhibition of P2Y12 reduces Akt phosphorylation even in the absence of arrestin-2 (p < 0.001 at 5 min) (Fig. 6). This reveals an arrestin-independent role for P2Y12 in addition to the role in arrestin recruitment evident from Fig. 4.
P2Y12 Directly Associates with PAR4 after Thrombin Stimulation of Human Platelets–To address the mechanism by which P2Y12 contributes to arrestin recruitment to PAR4, we considered recent evidence demonstrating oligomerization of P2Y12 receptors in platelets (19). We hypothesized that P2Y12 may physically associate with PAR4, and that the heterodimer or oligomer may present a site that facilitates arrestin binding. Previous work has shown that arrestin-2 facilitates the internalization of P2Y12 (20); therefore, the association of P2Y12 with PAR4 may simply recruit P2Y12-associated arrestin-2 to the same complex. To determine whether PAR4 and P2Y12 physically associate in platelets after agonist stimulation, human platelets were stimulated with thrombin or PAR4 agonist, lysed, and then immunoprecipitated with antibody to P2Y12 (Fig. 7A) or PAR4 (Fig. 7B) (the entire blot is shown in supplemental Fig. S1). The precipitates were immunoblotted for PAR4 or P2Y12, respectively. Fig. 7 shows that PAR4 associates with P2Y12 after thrombin or PAR4 stimulation of human platelets. The association is reduced by a P2Y12 antagonist, and a slight association is detected in platelets stimulated with ADP. These results suggest that P2Y12 and PAR4 form agonist-dependent heteromers in platelets, consistent with the idea that the physical association of P2Y12 with PAR4 helps to recruit arrestin-2 to PAR4.
Arrestin-2 Is Important for Thrombus Formation in a Carotid Artery Injury Model–Akt is important for the formation and maintenance of stable occlusive thrombi in mice (10). To determine whether arrestin-2 contributes to thrombus formation in mice, a ferric chloride-induced carotid artery injury model was used. Ferric chloride was applied for 2 min 15 s to the carotid arteries of wild-type or arrestin-2−/− mice, and the number of mice forming stable thrombi that impeded flow for 30 min was recorded. A graph of the results is shown in Fig. 8A. Mice forming thrombi that resolved before the end of the 30-min assay period were scored as having unstable thrombi. In wild-type mice, 73% of the mice assayed formed stable occlusions, compared with 18% of arrestin-2−/− mice, indicating that arrestin-2−/− mice have a statistically significant reduction in stable occlusive thrombus formation under these conditions (p = 0.03, two-tailed Fisher's exact probability test). Because this is a milder difference in total occlusion than we have previously reported for mice lacking Akt2, we also assessed whether time to thrombus formation differed between wild-type and arrestin-2−/− mice when mice were exposed to a lower concentration of ferric chloride (5%) for a longer time (3 min). The results show a significant difference in time to occlusive thrombus formation in 10 thrombus-forming mice of each genotype (Fig. 8B; p = 0.005, unpaired two-tailed Student's t test).

(Figure legend fragment: ...as in Fig. 2A. Immunoblots were scanned by densitometry and expressed as % of the maximal Akt phosphorylation detected in WT platelets stimulated for 5 min (A). The average of four experiments ± S.E. is shown. A significant difference between drug-treated and untreated samples at a given time point is denoted by * (p ≤ 0.05) or ** (p ≤ 0.01), two-way ANOVA with Bonferroni post-test analysis. Representative immunoblots probed with phospho-Akt antibody (upper blots) and re-probed with antibody to actin (lower blots) are shown in B. Also shown are representative immunoblots of AYPGKF-stimulated platelets in the presence or absence of ARC69931MX (300 nM) or MRS2179 (100 μM) to inhibit P2Y12 and P2Y1, respectively. The same results were obtained in two additional experiments using these inhibitors.)
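The stable-occlusion comparison (73% of 11 wild-type vs. 18% of 11 knock-out mice, i.e., 8/11 vs. 2/11, counts inferred here from the reported percentages) can be reproduced with a two-sided Fisher's exact test; a minimal sketch using only the standard library:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    row1 = a + b          # first-row total (e.g. wild-type mice)
    col1 = a + c          # first-column total (e.g. stable occlusions)
    denom = comb(n, row1)

    def p_table(x):
        # P(top-left cell = x) under fixed margins (hypergeometric)
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs + 1e-12)

# Stable occlusions: 8/11 wild-type vs. 2/11 arrestin-2-null (inferred counts)
p = fisher_exact_two_sided(8, 3, 2, 9)
print(round(p, 2))  # 0.03, matching the reported p = 0.03
```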
DISCUSSION
Arrestins can positively or negatively regulate distinct aspects of cellular function (2, 21), but the roles of arrestins in platelet function remain uncharacterized. Of the two non-visual arrestins, arrestin-2 is more easily detected by immunoblot analysis (Fig. 1A and additional data not shown), and its mRNA is more readily detectable in the platelet transcriptome (22). Small amounts of arrestin-3 may also be present and may provide some compensatory regulation in the absence of arrestin-2. Both single knock-out mice are viable (15, 23), but arrestin-2/arrestin-3 double knock-out mice die in utero (24). Arrestin-2 knock-out mice have few physiological defects, but display increased sensitivity to β-adrenergic stimulation in the heart, suggesting a role in desensitization of cardiac responses to β-agonists (15). To determine whether arrestin-2 might regulate platelet signaling or function, we evaluated arrestin complex formation in human and mouse platelets and the effects of arrestin-2 loss on mouse platelet function in vitro and in vivo.
The results show that activation of PAR4 stimulates association of the p85 regulatory subunit of PI3K with arrestin in a manner dependent on P2Y12 and SFKs. Lyn is incorporated into the complexes, suggesting that this is the relevant Src-family kinase contributing to arrestin-dependent signaling downstream of PAR4, and likely explains the role of Lyn in thrombin-dependent Akt phosphorylation and secretion noted by Cho et al. (17). We propose a mechanism in which arrestin-2 is recruited to activated PAR4 and in turn helps to recruit Lyn complexed with PI3K (see Fig. 9 for a diagrammatic summary of the signaling mechanism). ADP contributes to arrestin recruitment to PAR4, since inhibition of either P2Y12 or P2Y1 reduces arrestin association with PAR4. This work has uncovered a unique and surprising role for P2Y1 in contributing to arrestin recruitment to PAR4, which may partially explain the unexpected effect of P2Y1 inhibition on aggregation induced by low concentrations of thrombin (25). These data would suggest that P2Y1 should also affect Akt phosphorylation, which has not been reported previously. In fact, a role for P2Y1 in Akt phosphorylation is evident at 3 and 5 min post-PAR4 stimulation (Fig. 6). The reduction in PAR4-mediated Akt phosphorylation due to P2Y1 antagonism is smaller than that due to blockade of P2Y12 and is overcome at higher agonist concentrations, as is the case with arrestin-2 deletion. This may explain why no effect of a P2Y1 antagonist on Akt phosphorylation was observed previously (26, 27). Fig. 4 suggests that arrestin recruitment to PAR4 is reduced by broad-spectrum PKC inhibitors, but not by inhibitors of classical PKCs alone; these results may suggest that a unique non-Ca2+-dependent PKC isoform stimulated by P2Y1 plays a role in PAR4 phosphorylation to allow arrestin recruitment.

(Figure legend fragment: ...(1:1000). B, human platelets treated as above were immunoprecipitated with antibody to PAR4 (2 μg/ml), then immunoblotted with antibody to P2Y12 (1:1000).)

FIGURE 8. Arterial thrombus formation in WT and arrestin-2−/− mice. A, 10% ferric chloride-soaked filter paper was applied for 2 min and 15 s to the carotid arteries of pentobarbital-sedated wild-type or arrestin-2−/− mice, and the carotid arterial flow rate was measured using a Doppler flow probe. The percentage of each genotype forming stable thrombi that completely impeded flow for 30 min is shown in black, the percentage forming unstable thrombi in gray, and the percentage with no occlusion in white. The number of stable occlusions formed differs between WT and arrestin-2−/− mice, with p = 0.03 (two-tailed Fisher's exact probability test). The results of 11 WT and 11 arrestin-2−/− mice are shown. B, 5% ferric chloride-soaked filter paper was applied for 3 min to the carotid arteries of sedated mice and the flow rate was recorded as described; time to occlusive thrombus formation was recorded for 10 mice of each genotype. The mean time to occlusion differs between the two genotypes, with p = 0.005 (unpaired two-tailed Student's t test).
The role of P2Y12 in Akt phosphorylation is not limited to arrestin-dependent signaling, since P2Y12 inhibition further reduces Akt phosphorylation in arrestin-2−/− platelets (Fig. 6). P2Y12 may play a direct role in Lyn activation, for example: Src-family members have been found to associate with Gi family members (28-30) and G protein-coupled receptors (31, 32). P2Y12 is thus involved in both arrestin-independent and arrestin-dependent signaling, because P2Y12 also plays a role in arrestin recruitment to PAR4 (Fig. 4). The recent observation that P2Y12 receptors form homo-oligomers in platelets (19) suggested to us that P2Y12 may help to recruit arrestin-2 to PAR4 by forming a heterodimer or oligomer that facilitates arrestin binding. The idea that GPCR dimers may be required for arrestin-dependent signaling has precedent in both the muscarinic and α-adrenergic receptor systems (33, 34). In fact, we have detected the agonist-dependent association of PAR4 with P2Y12 in human platelets using an immunoprecipitation approach (Fig. 7). Taken together with a previous study showing that arrestin-2 facilitates internalization of P2Y12 (20), these data suggest a model in which agonist stimulation of PAR4 recruits P2Y12 pre-complexed with arrestin-2.
It is clear that PAR4-dependent signaling to Akt activation and fibrinogen binding is not solely due to ADP release, because PAR4- and ADP-induced signaling are differentially sensitive to arrestin-2. This study uncovers a unique role for arrestin-dependent PAR4 signaling to Akt, for which P2Y12 signaling alone is insufficient. It is worth noting that these experiments were done primarily with thrombin and PAR4 peptide agonists, so that responses could be compared in arrestin-2 knock-out mice (PAR1 is not expressed in mouse platelets); whether arrestin-2 is required for signaling downstream of PAR1 in platelets therefore remains unresolved.
Analysis of thrombus formation using a ferric chloride arterial injury model reveals that arrestin-2 positively regulates thrombus formation in vivo. This would seem to reflect its role in supporting PAR4 signaling to Akt, since arrestin-2 did not affect ADP-induced fibrinogen binding. The defect in thrombosis in arrestin-2 knock-out mice appears milder than that previously observed in Akt2−/− mice under similar conditions, which is consistent with the notion that arrestin-2 is only partially responsible for Akt phosphorylation by PAR4. While arrestins can clearly mediate desensitization of receptor signaling in some contexts, ADP-induced fibrinogen binding and Akt phosphorylation were not significantly affected by the loss of arrestin-2. Furthermore, the positive role played by arrestin-2 in the thrombosis model suggests that its role in recruiting PI3K complexes is more important to thrombus formation than any potential role in desensitizing platelet receptors for ADP or other agonists. Alternatively, these results may reflect the largely thrombin-dependent nature of the ferric chloride injury model, which may be particularly sensitive to, and thus somewhat biased toward, detecting defects in PAR4-dependent pathways. Despite this caveat, the model reveals that arrestin-dependent signaling can play important positive roles in regulating thrombus formation in vivo.
Effects of Individual, Spousal, and Offspring Socioeconomic Status on Mortality Among Elderly People in China
Background: The relationship between socio-economic status and health among elderly people has been well studied, but less is known about how spousal or offspring's education affects mortality, especially in non-Western countries. We investigated these associations using a large sample of Chinese elderly.

Methods: The data came from the Chinese Longitudinal Healthy Longevity Survey (CLHLS) from the years 2005 to 2011 (n = 15 355, aged 65–105 years at baseline; 5046 died by 2008, and a further 2224 by 2011). Educational attainment, occupational status, and household income per capita were used as indicators of socio-economic status. Spousal and offspring's education were added into the final models. The Cox proportional hazards model was used to study mortality risk by gender.

Results: Adjusted for age, highly educated males and females had, on average, 29% and 37% lower mortality risk, respectively, than those with a lower education. Among men, this effect was observed particularly among those whose children had intermediate education only. A higher household income was also associated with lower mortality risk among the elderly. Male elderly living with a well-educated spouse (HR 0.79; 95% CI, 0.64–0.99) had a lower mortality risk than those living with a low-educated spouse.

Conclusions: Both the socio-economic status of the individual and the educational level of a co-resident spouse or child are associated with mortality risk in elderly people. The socio-economic position of family members plays an important role in producing health inequality among elderly people.
INTRODUCTION
The inverse association between socioeconomic status and health is well established. 1-5 People with a higher socioeconomic status have almost universally been found to have better health as measured by various indicators, such as self-rated health and mortality. However, much less is known about whether the socioeconomic status of a spouse or of offspring affects the health and longevity of the partner or parent, respectively. Some studies indicate that spousal and offspring's education have significant effects on an individual's mortality, but these were all conducted in high-income countries. 6-12 Little is known about how the socioeconomic status of other family members affects health in non-Western countries, even though family ties and obligations may matter more for the health of the elderly in such cultural contexts.
In contrast to Western societies, in which most elderly people live separately from their adult children, co-residence with family members at older ages is still common in China, where filial piety is considered one of the fundamental values ensuring familial harmony and development. 13 Most previous studies treat socioeconomic status as an individual-level rather than a family-level resource. 14-16 Conceptualized as a family-level resource, the health of the elderly depends not only on their own socioeconomic status but also on that of their family members. The educational levels of family members are therefore likely to have a stronger association with mortality among the elderly in China than has been observed in other countries, because intergenerational co-residence is so common. Several studies have examined the effect of elderly Chinese people's own socioeconomic status on their mortality risk, 17,18 but the extent to which the socioeconomic status of other family members, in particular spouses and children, affects their mortality is still unclear.
This study investigated whether and to what extent spousal and offspring's education influences mortality among elderly males and females, net of the individual's own socioeconomic status. In addition, we examined the interaction effects between the educational levels of elderly people and of their children to assess the joint contribution of these socioeconomic factors on mortality and thereby assess whether a high level of education among offspring can offset the effects of low parental education.
METHODS

Data
We used data from the Chinese Longitudinal Healthy Longevity Survey (CLHLS), which was conducted by the Centre for Healthy Aging and Family Studies at Peking University. CLHLS was based on longitudinal survey data gathered via internationally compatible questionnaires from large samples focusing on healthy longevity among the elderly in China. The survey was initiated in 1998 based on a randomly selected sample of older Chinese adults from 22 of the 31 provinces of mainland China, which account for about 85% of the total population of mainland China. 17 The first two surveys mainly targeted those aged 80 years and over, and the younger elderly (aged 65 years and above) were added from the 2002 wave. The method to select younger elderly was similar to that of selecting those aged 80 years and above. A follow-up face-to-face interview survey was conducted every 2 or 3 years. The survey contained extensive information on Chinese elderly people, including socio-economic position, family structure and background, living arrangements, daily activities, and health condition. Dates of death were validated based on death certificates and confirmation from relatives. We obtained permission from the Centre for Healthy Aging and Family Studies at Peking University to use the data.
Children's education was added into the survey starting in 2005. We therefore selected the sample of elderly people aged between 65 and 105 years in 2005 as our baseline. The analytical baseline sample used in this study comprised 15 355 respondents. Of these, 5046 died, 2899 were lost to follow-up, and 7410 survived to 2008. By the year 2011, 2224 had died, 1017 had been lost to follow-up, and 4169 survived. Altogether, these respondents yielded 26 935 person-years of records during the nearly 6-year study period.
Measures
Respondent socioeconomic status was measured using the highest level of educational attainment, occupational status, and household income per family member. Education was measured in the data as years of schooling. Because nearly half of the elderly had not had any formal education, it was recoded into three categories: low (no schooling, 0 years), intermediate (primary school, 1-6 years), and high (middle school or more, 7 years or more). Occupational status before the age of 60 years was classified into three categories: farmers, white-collar workers (including professional and technical personnel; governmental, institutional, or managerial staff; and military personnel), and others. Household income per capita (total household income divided by the number of co-resident family members) was divided into quartiles.
Given the collinearity between spousal education and living arrangements, these variables were recombined into the following categories: 1) low education (0 years), living with a spouse; 2) intermediate education (1-6 years), living with a spouse; 3) high education (7 years or more), living with a spouse; 4) no co-resident spouse.
The co-resident adult children's education was classified into five categories, with cut-offs differing slightly from those used for parental education: a low education included no education and primary school (0-6 years), an intermediate education included middle school (7-9 years), and a high education indicated upper-secondary education or above (10 years or more). For elderly people living with more than one child, educational attainment reflected that of the most highly educated child.
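The recoding rules above can be written out directly; a sketch (the function names and the helper RANK map are ours, not from the CLHLS codebook):

```python
def parent_education(years):
    """CLHLS parental education: low (0 y), intermediate (1-6 y), high (7+ y)."""
    if years == 0:
        return "low"
    return "intermediate" if years <= 6 else "high"

def child_education(years):
    """Offspring education: low (0-6 y), intermediate (7-9 y), high (10+ y)."""
    if years <= 6:
        return "low"
    return "intermediate" if years <= 9 else "high"

RANK = {"low": 0, "intermediate": 1, "high": 2}

def coresident_children_education(years_list):
    """With several co-resident children, use the most highly educated one."""
    return max((child_education(y) for y in years_list), key=RANK.__getitem__)

print(parent_education(5))                        # intermediate
print(coresident_children_education([4, 8, 12]))  # high
```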
The covariates in this study included residential area (1 = rural area, 0 = urban area); self-rated health (good, fair, or poor); smoking status (current smokers, past smokers, or never smokers); exercise ("Do you exercise regularly at present?"; 1 = yes, 0 = no). Residential area was based on information on the Chinese 'Hukou' household registration system; in rural areas, agriculture is an important economic activity, whereas in urban areas, including cities and towns, agriculture is less common. Considering that the proportion of missing values was less than 1% for all variables, those with missing information were categorized separately.
Statistical methods
We first derived the descriptive statistics and age-adjusted death rates (number of deaths per 10 000 person-years) stratified by gender. We then estimated the multivariate Cox proportional hazards model to study mortality. All the analyses were conducted separately for males and females, given that mortality risk varied by gender. Survival time was calculated in days from the date of the first interview in 2005 to that of the last interview in 2011 for survivors, and to the date of death for the deceased. In the case of those who were lost to follow-up between the different waves, survival time was the number of days from the first interview date in 2005 to the last known interview date.
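Survival time as defined here (days from the 2005 baseline interview to death, to the 2011 interview for survivors, or to the last known interview for those lost to follow-up) reduces to date arithmetic, as does the person-years denominator of the death rates above; a sketch with illustrative dates (not taken from the data):

```python
from datetime import date

def survival_days(baseline, end):
    """Days from the baseline interview to death or the last known interview."""
    return (end - baseline).days

# Illustrative follow-up records: (baseline interview, exit date, died?)
records = [
    (date(2005, 6, 1), date(2008, 7, 15), True),   # died before the 2008 wave
    (date(2005, 6, 1), date(2011, 5, 30), False),  # survived to the 2011 wave
    (date(2005, 6, 1), date(2008, 6, 1), False),   # lost to follow-up
]

person_years = sum(survival_days(b, e) for b, e, _ in records) / 365.25
deaths = sum(died for _, _, died in records)
rate_per_10k = 10_000 * deaths / person_years
print(f"{person_years:.1f} person-years, {rate_per_10k:.0f} deaths per 10 000 person-years")
# 12.1 person-years, 825 deaths per 10 000 person-years
```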
We estimated four different models for males and females and reported their hazard ratios (HRs) and 95% confidence intervals (CIs) in the tables. Model 0 is an age-adjusted model with each independent variable included separately. The respondents' socioeconomic-status indicators (education, occupational status, and household income per capita), age, and residential area were included simultaneously in model 1. Next, spousal and offspring's education were added in model 2 to assess the extent to which the effect of the elderly person's socioeconomic status on mortality was mediated by the educational level of their spouse or adult children. This approach was used because we found that the individual's education was associated with all other indicators of socioeconomic position (eTable 1). Finally, we included the self-rated health and health-behavior variables (ie, smoking status and exercise) in the final model 3. These variables were considered mediating variables possibly on the causal pathway between the socioeconomic variables and mortality. We also present the interaction effects between the parent's and the children's education in predicting age-adjusted mortality risk. Ethical approval was not required, as this study was a secondary analysis of open-access data and the data contained no individual identifying information. All analyses were performed using Stata 11.2 (Stata Corp, College Station, TX, USA).

RESULTS

Table 1 presents the descriptive statistics and age-adjusted death rates for males and females. Men were more highly educated than women, with only about 4% of women having a high education. The proportions of male and female farmers were about 56% and 64%, respectively. Overall, the distributions of per capita household income and children's education were similar among males and females.

The varying distributions of education were also reflected in spousal education; for example, 33% of the males had a spouse with a low level of education, almost five times the corresponding rate among the spouses of females. Men had higher age-adjusted mortality rates than women across all variables. Table 2 and Table 3 present the HRs from the Cox proportional hazards model predicting mortality among males and females, respectively. Education was inversely associated with mortality risk among both males and females. Among males, those with an intermediate or high education had a nearly 13% lower mortality risk than those with a low education (model 1). A high household income had a significant effect on mortality risk; for instance, the risk of death among elderly men in the third and highest household-income quartiles was 37% and 48% lower, respectively, than among those in the lowest income quartile (model 1). When both spousal and offspring's education were added in model 2, the effect of education on mortality weakened but remained significant, whereas the effect of household income changed only slightly. Spousal and offspring's education also had a protective effect in reducing older people's mortality risk: the risk of death among those whose spouse had an intermediate or a high education, compared with a low education, was 20% and 21% lower, respectively (model 2). The HR for those living with children educated to a high level was 0.83 (95% CI, 0.74-0.92) (model 2). When self-rated health and health-related behaviors (ie, smoking and regular exercise) were included in model 3, the effect of the elderly person's education continued to decline, but the HRs for spousal and offspring's education did not change much (model 3). This suggests that the effects of relatives' education on mortality are not mediated through self-rated health and health-related behaviors.
The effect of education on mortality risk differed slightly between females and males. Females with an intermediate education had a 9% lower risk of death than those with a low education, but the difference between a high and a low education was not significant (model 1). When spousal and offspring's education were added in model 2, the effects of educational level declined and became statistically non-significant. As among males, a higher household income reduced the mortality risk of the elderly, and this effect remained statistically significant. Net of their own education, those whose co-resident children had a high level of education had a 19% lower mortality risk compared with a low level (model 2). When all the covariates were added in the final model, the effects of education and occupational status remained non-significant, whereas the effect of household income was still significant.

Table 4 shows the interaction effects between the elderly parent's education and their children's education in predicting mortality, separately for men and women. Older males with a low education faced a 31% lower risk of death if their co-resident child had a high education. Among men with a high education, the HR was 0.70 (95% CI, 0.57-0.85) if their co-resident children had a high level of education. Of note, when the co-resident child had a low education, elderly men had 13% higher mortality and elderly women 8% higher mortality if the elderly participant had a high rather than a low education (ie, the association between an individual's own education and mortality was reversed relative to the other categories of children's education). The interaction effect was statistically significant only for males (P = 0.002 for males and P = 0.15 for females).
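The percent differences quoted alongside these hazard ratios are simple transformations: a Cox coefficient β gives HR = exp(β), and an HR below 1 corresponds to a (1 − HR) × 100 percent lower risk. A quick check against figures of the kind reported here:

```python
from math import exp, log

def pct_lower_risk(hr):
    """Percent reduction in mortality risk implied by a hazard ratio < 1."""
    return (1 - hr) * 100

print(round(pct_lower_risk(0.70)))  # 30 -> an HR of 0.70 is a ~30% lower risk
print(round(pct_lower_risk(0.83)))  # 17
# The underlying Cox coefficient is recovered as beta = log(HR)
beta = log(0.70)
print(round(exp(beta), 2))  # 0.7
```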
DISCUSSION
Consistent with findings from previous studies, our results confirm the strong association between higher household income and lower mortality among the elderly. 19-21 Economic resources, such as family income, consistently and significantly affect mortality risk in older people in China. Higher household income also predicted a lower mortality risk after adjustment for other covariates, such as the individual's own socioeconomic status and health-related behaviors. Enhanced economic resources could give the elderly access to a better quality of life and to adequate medical care and services. 22,23 The effect of income on mortality among the older people investigated in this study turned out to be stronger than has been observed in studies of the elderly in high-income countries. 7,24-26 Most significantly, our results indicate an association between higher offspring's education and a lower mortality risk among older people, and this association was especially strong among elderly males. Simultaneous adjustment for an individual's own socioeconomic status or that of their offspring partly attenuated these effects. We also found that elderly people with a higher education and with highly educated children had lower mortality. However, the protective effects of higher education tended to be most pronounced among elderly whose children had intermediate-level education, particularly among elderly men. Because of these interactions, the main effect of an individual's own education on male mortality should be interpreted with caution, as its effects may vary according to the education of co-resident children. Overall, our results indicate that it is not only the individual's own socioeconomic status that affects health in older people, but also the educational level of the individual's spouse and offspring. This suggests that, to some extent, education is a household-level rather than a purely individual-level resource. 27

For both males and females, those who had no co-resident children had lower hazard ratios of mortality. We interpret this as a result of health selection: the healthiest elderly people can live alone, whereas those who need help with daily tasks are more likely to live with their children in order to receive care.
Marriage also had a protective effect on health. Elderly people living with a spouse had a lower mortality risk than those without one, and those living with a highly educated spouse had a lower risk than those whose spouse had only a basic education. For example, among the males, those living with a highly educated spouse had a 21% lower mortality risk in the fully adjusted model. Our results are consistent with previous findings from England, Sweden, Norway, and Israel indicating that a partner's education is a significant predictor of one's own mortality risk. 6,8,28,29 One possible explanation is that married men and women can share economic resources and give one another social and emotional support. 9,30 A wife's high education lowered her husband's mortality risk more significantly (P < 0.05) than vice versa. The fact that highly educated women tend to show better health-related and lifestyle behaviors, which benefit the health of their husbands, might explain this effect. 9,28 However, further investigation in the Chinese context is still needed.
Having highly educated adult children is consistently associated with a lower risk of parental death in welfare states, such as the Nordic countries, 11,12 where social services for the elderly are strongly supported and most adult children do not live with their parents. We demonstrated a similar association in China, where socioeconomic disparity is increasing, public services for the elderly are moderate, and co-residence with adult children is common. We found that elderly males and females living with a highly educated child had a roughly 15% lower mortality risk than those living with a child educated to a low level. A Swedish study found a similarly lower mortality risk among people living with a child educated to the tertiary level, compared with those whose co-resident child had received only compulsory education. 11 Although social policy is comparatively egalitarian in welfare societies, and governments support equality in the provision of public healthcare to the elderly, upward intergenerational exchange and support remain strong, and offspring's education still has a strong effect on their parents' health. 11,31 In China, on the other hand, where the government can supply only basic healthcare to increasing numbers of older people, children take the main responsibility for the care of their ill and aging parents, especially in rural areas. 32,33 We found that spousal and offspring's education are equally important for the health of elderly people. From a policy perspective, efforts to raise the educational level of the whole population may help to improve health, reduce mortality among the elderly, and reduce health inequality in the long run.
We also found that parents with a higher education tended to benefit more from their highly educated children, which is consistent with the results of earlier research conducted in Taiwan. 27 Children or other family members still seem to be the main organizers, suppliers, and financiers of healthcare for the elderly in Chinese societies, in which family values and responsibility are highly respected. Highly educated children can afford better medical care and services and have better access to health-related knowledge, to the benefit of their parents' health. 11,23 Highly educated elderly parents living with highly educated children may more readily take advantage of the resources they contribute than old people with a low level of education. 27

Limitations of this study should also be noted. The first is that we only had sufficient power to analyze overall mortality risk by gender. Some previous studies indicate that spousal education has a different effect on cardiovascular disease (CVD) mortality than on other types of cause-specific mortality, 8,34 but we could not analyze this because of the limited data. Another concern is that the measurements of health and health-related behaviors may be inaccurate, and we also lack information on health-related behaviors among family members. Older people's health may be affected by health-related behaviors, such as smoking; highly educated spouses and children tend to be less likely to smoke, which could influence their co-resident partner's or parent's health behavior.
Overall, the socioeconomic resources of family members play an important role in producing health inequality among elderly people. Our results, obtained from analysis of extensive and representative longitudinal data in China, provide strong evidence of an effect of spousal and offspring's education on partners' and parental mortality risk, respectively. Our findings indicate that higher education among family members plays a significant role, in addition to individual socioeconomic resources, in reducing elderly people's mortality risk. Hence, enhancing the socioeconomic status of offspring may help to reduce socioeconomic differentials and inequality in health and mortality among elderly people in the future.
ONLINE ONLY MATERIAL eTable 1. Associations between an individual's own education, other socioeconomic status, high spousal education, and high children's education at baseline.
Effects of a High Fat Meal Associated with Water, Juice, or Champagne Consumption on Endothelial Function and Markers of Oxidative Stress and Inflammation in Young, Healthy Subjects
Endothelial dysfunction (ED), often linked to hypertriglyceridemia, is an early step of atherosclerosis. We investigated, in a randomized cross-over study, whether high-fat meal (HFM)-induced ED might be reduced by fruit juice or champagne containing polyphenols. Flow-mediated dilatation (FMD) and biological parameters (lipid profile, glycemia, inflammation, and oxidative stress markers) were determined before and two and three hours after the HFM in 17 healthy young subjects (24.6 ± 0.9 years) drinking water, juice, or champagne. Considering the entire group, despite significant hypertriglyceridemia (from 0.77 ± 0.07 to 1.41 ± 0.18 mmol/L, p < 0.001) and a decrease in low-density lipoprotein (LDL) cholesterol, the FMD was not impaired. However, the FMD decreased in 10 subjects (from 10.73 ± 0.95 to 8.13 ± 0.86 and 8.07 ± 1.16%; p < 0.05 and p < 0.01; 2 and 3 h, respectively, after the HFM), without concomitant changes in C-reactive protein or reactive oxygen species, but with an increase in glycemia. In the same subjects, the FMD did not decrease when they drank juice or champagne. In conclusion, an HFM can impair the endothelial function in healthy young subjects. Fruit juice, rich in anthocyanins and procyanidins, or champagne, rich in simple phenolic acids, might reduce such alterations, but further studies are needed to determine the underlying mechanisms, likely involving polyphenols.
Introduction
It is now well established in animals and humans that endothelial cells play a decisive role in the control of vascular homeostasis. Their protective effect is explained by endothelial cells' ability to release powerful vasoactive factors and to inhibit the proliferation and migration of vascular smooth muscle cells.
Study Design
This was a randomized, blind, monocentric, cross-over study. In accordance with international recommendations, the subjects refrained from strenuous physical activity for ≥24 h and fasted for 12 h before the study. Sessions began at the same hour in the morning to prevent diurnal variation in the FMD response, and the subjects lay in a quiet room where the temperature was held constant. The high-fat meal consisted of 81 g fat, 101 mg cholesterol, 230 g carbohydrate, and 44 g protein, and the subjects drank either 300 mL of water, fruit juice, or champagne. Each subject was tested three times, the three sets of the study being separated by a 7-day "wash-out" period.
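The cross-over allocation described above can be sketched as follows; the uniform sampling of drink sequences and the function name are illustrative assumptions, not the trial's actual randomization scheme.

```python
import random
from itertools import permutations


def assign_crossover_orders(n_subjects: int,
                            treatments=("water", "juice", "champagne"),
                            seed: int = 42):
    """Randomly assign each subject an order of the three test drinks.

    A simple randomization sketch: each subject receives one of the
    6 possible sequences, sampled uniformly with a fixed seed so the
    schedule is reproducible. The 7-day wash-out separates sessions
    and is handled by the study calendar, not by this function.
    """
    rng = random.Random(seed)
    orders = list(permutations(treatments))
    return [rng.choice(orders) for _ in range(n_subjects)]


schedule = assign_crossover_orders(17)
print(schedule[0])  # the first subject's drink sequence
```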
Hemodynamic Parameters
For FMD determination, we measured the diameter of the humeral artery by ultrasonography, according to previous reports [6-9], using a high-resolution 2-dimensional ultrasound imaging system (ATL HDI 5000; Advanced Technology Laboratories, Bothell, WA, USA) in B-mode. Electrocardiography-triggered ultrasound images were obtained with a high-resolution linear-array transducer. Ultrasound parameters were set to optimize longitudinal B-mode images of the lumen/arterial wall interface.
After a resting period >15 min, the probe was fixed and the patient's arm remained in the same position throughout the study. A baseline recording of the arterial diameter was performed, and ischemia was obtained using an occlusion cuff inflated to 50 mmHg above the patient's baseline systolic blood pressure for 5 min. The same settings were maintained during the study, and FMD was calculated as the largest change in the brachial artery diameter with reperfusion, at the peak of the R wave of the EKG. Diameter was measured at baseline and immediately after cuff deflation, at 20, 40, 60, and 80 s. The FMD, measured before the meal and 2 h (peak bioavailability of polyphenols) and 3 h after eating, was expressed as: FMD (%) = 100 × (peak diameter − baseline diameter)/baseline diameter. Heart rate (HR) and systemic blood pressure were also determined.
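A minimal sketch of the FMD computation described above (largest post-deflation diameter relative to baseline, expressed as a percentage); the diameter values are illustrative:

```python
def flow_mediated_dilatation(baseline_mm: float,
                             post_deflation_mm: list[float]) -> float:
    """FMD (%) = 100 * (peak post-deflation diameter - baseline) / baseline."""
    peak = max(post_deflation_mm)
    return 100.0 * (peak - baseline_mm) / baseline_mm


# Diameters (mm) measured at 20, 40, 60, and 80 s after cuff deflation.
fmd = flow_mediated_dilatation(3.50, [3.62, 3.78, 3.71, 3.66])
print(round(fmd, 1))  # -> 8.0
```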
Biological Parameters
Blood samples allowed us to determine the kinetics of the lipid profile (plasma triglyceride levels, total cholesterol, and low-density lipoprotein (LDL) and high-density lipoprotein (HDL) cholesterol), glycemia, and ultrasensitive C-reactive protein (CRP) using routine biochemical analyses. Four venous blood samples were drawn before, and 1, 2, and 3 h after the HFM, using a venous line (18-gauge catheters).
Statistical Analysis
All data are expressed as mean ± standard error of the mean (SEM) and were analyzed using Prism software (GraphPad Prism 5, GraphPad Software, San Diego, CA, USA). We tested all the parameters for the normality assumption in all groups using the Shapiro-Wilk test. When a parameter was not distributed normally, we performed a non-parametric test on repeated values (Friedman test) followed by a post hoc test (Dunn's multiple comparison test) for the entire data sets (all time points). LDL and glycemia were distributed normally in all groups, and Bartlett's test demonstrated that they met the homogeneity assumption. We therefore applied a parametric test on repeated values (ANOVA) followed by a post hoc test (Newman-Keuls test). In all cases, a p value <0.05 was considered significant.
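The test-selection logic above can be sketched with SciPy. Note the assumptions: `f_oneway` is a between-groups ANOVA used here as a simplified stand-in for the repeated-measures ANOVA in the text, the Dunn and Newman-Keuls post hoc tests are omitted, and the triglyceride data are mocked.

```python
import numpy as np
from scipy import stats


def compare_timepoints(data: np.ndarray, alpha: float = 0.05):
    """data: subjects x timepoints array of one parameter.

    Mirrors the gating described above: Shapiro-Wilk per timepoint,
    then the Friedman test (non-parametric, repeated values) if any
    timepoint deviates from normality, otherwise ANOVA (here scipy's
    between-groups f_oneway as a simplified stand-in).
    """
    shapiro_p = [stats.shapiro(col)[1] for col in data.T]
    if all(p >= alpha for p in shapiro_p):
        stat, p = stats.f_oneway(*data.T)
        test = "ANOVA"
    else:
        stat, p = stats.friedmanchisquare(*data.T)
        test = "Friedman"
    return test, stat, p


# Mock triglyceride kinetics: 17 subjects, 4 time points (0, 1, 2, 3 h).
rng = np.random.default_rng(0)
tg = rng.normal([0.77, 1.10, 1.29, 1.41], 0.15, size=(17, 4))
print(compare_timepoints(tg))
```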
Characteristics of the Subjects
The main clinical and biological characteristics of the 17 subjects are presented in Table 1.
Corresponding to the inclusion criteria, they were young and healthy.
FMD Evolution
At the systemic level, we did not observe any significant variations in heart rate or blood pressure during the protocol.
Biological Effects
Concerning the lipid profile, the HFM induced a significant increase in plasma triglyceride levels (from 0.77 ± 0.07 to 1.29 ± 0.15 (p < 0.01) and 1.41 ± 0.18 mmol/L (p < 0.001), 2 and 3 h post-meal, respectively; Figure 1E). No changes in plasma levels of total and HDL cholesterol were observed (Figure 1F,G), but there was a gradual decrease in plasma LDL cholesterol levels over time (from 2.33 ± 0.15 to 2.17 ± 0.14 (p < 0.001) and 2.11 ± 0.15 mmol/L (p < 0.001), 2 and 3 h post-meal, respectively) (Figure 1H). Glycemia did not show significant variations (Figure 1B), and the high-fat meal did not modify the plasma concentration of ultra-sensitive CRP (Figure 1C). Similarly, ROS production showed no difference after the HFM (Figure 1D).
However, since individual responses might differ, we investigated them and thereby identified a subgroup of 10 subjects, out of the 17, in whom the FMD decreased after the high-fat meal. Their characteristics are presented in Table 2. In particular, knowing that baseline FMD and glycemia might influence subsequent FMD modulation by meal ingestion, we investigated their possible differences in the three sets of the study in the 10 selected subjects. Before HFM ingestion, the FMD values were 10.73 ± 0.95, 8.17 ± 0.92, and 9.45 ± 0.82 in the same 10 subjects while randomly drinking water, juice, and champagne, respectively. Baseline FMD values were significantly lower in the subjects when drinking juice as compared to water (p < 0.05). Before HFM ingestion, glycemia values were 5.11 ± 0.17, 5.21 ± 0.10, and 5.19 ± 0.08 in the subjects drinking water, juice, and champagne, respectively. No significant difference was observed between any groups.
On the basis of similar published data demonstrating the value of a stratified analysis [27], we present the results observed in this selected subgroup (n = 10 for all parameters except EPR, n = 5) when drinking water (Figure 2), juice (Figure 3), or champagne (Figure 4).
FMD Evolution
With juice ingestion, the high-fat meal did not modify the endothelial function and, thus, FMD was not reduced ( Figure 3A).
Biological Effects
The HFM did not affect any biological parameters in the subgroup except triglyceridemia, which increased significantly.
FMD Evolution
The high-fat meal did not modify significantly the FMD in the subject group drinking champagne (Figure 4A).
Discussion
The main findings of this study are that the high-fat meal significantly and similarly increased triglyceridemia in the three experimental sets, and that 10 out of the 17 subjects demonstrated a significant FMD decrease when drinking water, but not when drinking either fruit juice or champagne.
Effects of the High-Fat Meal When Drinking Water
Considering the entire group of 17 subjects, the HFM did not result in significant FMD changes despite the increase in triglyceridemia, which is thought to be a major causal factor of endothelial dysfunction [12,13]. A greater FMD decrease might have been observed in a population characterized by cardiovascular risk factors, but we found it of interest to assess the potential deleterious effects of this type of meal, often eaten by young subjects in whom atherosclerosis and cardiovascular disease might occur later on. Indeed, endothelial dysfunction is interesting to investigate in young subjects since it represents a very early event in the atherosclerosis process.
As shown recently [14], individual variations might occur. We therefore analyzed each subject's responses to the HFM and, interestingly, the FMD decreased significantly in 10 subjects. We will now specifically discuss the data obtained in these 10 subjects characterized by a decrease in FMD after the HFM. Endothelial dysfunction was observed after a short-term HFM in experimental animals [28]. Further, FMD has been shown to be reduced after a single HFM in humans [29]. Besides the increase in triglycerides and total cholesterol that likely explains the decreased FMD, other parameters, such as oxidative stress, inflammation, and acute hyperglycemia, might be involved [13,[30][31][32][33][34][35]. In our study, the high-fat meal did not induce changes in short-term oxidative stress, as inferred from ROS production determination, and the inflammatory status of the subjects, assessed by the plasma ultra-sensitive CRP assay, was not modified. However, acute hyperglycemia is well known to induce endothelial dysfunction [31][32][33] and, although we cannot infer a causal relationship from our results, it is of note that glycemia increased significantly, albeit slightly, in these subjects. Concerning the significant LDL decrease after the HFM, there are few data in the literature. Liu et al. observed similar LDL values after peanut consumption [27]. On the other hand, Dalbani et al. reported that decreased LDL after a fat meal was not associated with a decrease in FMD in healthy male subjects eating tomato paste [36].
Effects of Fruit Juice and Champagne Wine on the Postprandial Endothelial Function after the High-Fat Meal
We chose these beverages in view of their differences in polyphenol content. The fruit juice included red fruits, such as grapes, lingonberry, blueberry, strawberry, and black aronia. It contained a high concentration of total polyphenols, 3.2 grams of gallic acid equivalent per liter of juice (g/L GAE), measured using the Folin-Ciocalteu reagent according to the Singleton method [37]. Owing to its fruit composition, it also predominantly contained anthocyanidins and procyanidins, although other classes of compounds were present, such as flavan-3-ols or phenolic acids [38].
Champagne is made from a blend of various grape varieties (Chardonnay, Pinot Noir, and Pinot Meunier). The content of total polyphenols in Champagne wines is between 200 and 300 mg gallic acid equivalent per liter, which is high for white wines (50 to 300 mg/L GAE) but remains significantly lower than in red wines (0.8 to 4 g/L GAE). Champagne wines mainly contain phenolic acids, particularly caftaric acid, as well as some flavonoids [39].
Thus, our study design allowed us to compare the potential effects of different polyphenol types on the HFM-induced impairment in FMD. Interestingly, contrary to the reduced FMD observed when drinking water, no significant modification in FMD was observed in the same subjects drinking either juice or champagne. Important biological parameters, such as inflammatory and oxidative stress markers, did not vary between groups and thus might not be involved in these results. On the contrary, glycemia increased significantly only when the subjects drank water, and not juice or champagne. This was not expected and, although baseline glycemia values were similar in the three sets of the study, it might have contributed to the greater decrease in FMD observed in the 10 subjects when drinking water. Further, based on literature data, it is likely that the polyphenols present in both drinks participated in the lack of FMD decrease, since they are known to protect the vascular function both in experimental animals and in humans. Alternatively, considering the juice, baseline FMD might have played a role. Indeed, the meal-induced decrease in FMD can be reduced in subjects with a significantly lower baseline FMD [40].
Limitations of the Study
Although the clinical relevance of the HFM-associated decrease in FMD observed in the 10 subjects drinking water might be discussed, it appeared that a similar reduction (about 2%) in post-prandial FMD might correspond to an 18% increase in cardiovascular events [41]. Additionally, as stated before, these results were obtained in a relatively small sample size of healthy young subjects and further studies will be useful in a larger sample to determine whether the FMD decrease would be greater in subjects without and with cardiovascular risk factors and whether such beverages containing polyphenols might present with protective effects.
Conclusions
This study supports the view that an HFM can impair the endothelial function in healthy young subjects. The decrease in FMD was not observed when the subjects drank fruit juice, rich in anthocyanins and procyanidins, or champagne, rich in simple phenolic acids. Further studies will be useful to confirm these data in a larger population and to determine the mechanisms involved. The responses observed in patients with cardiovascular risk factors, in whom greater endothelial dysfunction after an HFM is expected, would also be interesting to investigate.
Intracellular ATP levels in mouse cortical excitatory neurons varies with sleep–wake states
Whilst the brain is assumed to exert homeostatic functions to keep the cellular energy status constant under physiological conditions, this has not been experimentally proven. Here, we conducted in vivo optical recordings of intracellular concentration of adenosine 5’-triphosphate (ATP), the major cellular energy metabolite, using a genetically encoded sensor in the mouse brain. We demonstrate that intracellular ATP levels in cortical excitatory neurons fluctuate in a cortex-wide manner depending on the sleep-wake states, correlating with arousal. Interestingly, ATP levels profoundly decreased during rapid eye movement sleep, suggesting a negative energy balance in neurons despite a simultaneous increase in cerebral hemodynamics for energy supply. The reduction in intracellular ATP was also observed in response to local electrical stimulation for neuronal activation, whereas the hemodynamics were simultaneously enhanced. These observations indicate that cerebral energy metabolism may not always meet neuronal energy demands, consequently resulting in physiological fluctuations of intracellular ATP levels in neurons. Akiyo Natsubori et al. use a genetically encoded sensor to measure intracellular ATP levels in mouse cortical excitatory neurons in vivo. They show cortex-wide variations in ATP levels across sleep-wake states, and that cerebral energy metabolism does not always meet neuronal energy demands.
Energy homeostasis is crucial for enabling vital cellular activities. In the brain, such homeostatic mechanisms are often described as "neurometabolic coupling" (NMC) between high-frequency neuronal oscillations that are involved in energy expenditure and hemodynamics or glucose metabolism [1][2][3][4]. These couplings have been widely used for functional brain imaging, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), and could locally function to maintain a constant cellular energy status.
Furthermore, global brain energy homeostasis could also function in a state-dependent manner. Across the sleep-wake states of animals, brain metabolic activities for energy supply such as hemodynamics and glucose metabolism and multiple cellular energy-consuming activities including neuronal firings simultaneously fluctuate over wide brain areas [5][6][7][8] . In the wake state, brain-wide metabolic activities and neuronal activities increased, and in the non-REM sleep state, they decreased [5][6][7][8] . These global parallel dynamics of energy producing and consuming activities across the sleep-wake states could be brought about by the brain monoaminergic systems 5,9 . Under these local and global brain energy homeostatic mechanisms, it is hypothesized that the cellular energy status in the brain could be maintained constant across the sleep-wake states of animals and cellular energy depletion could be prevented in all physiological conditions. However, this has not been experimentally proven.
Adenosine 5′-triphosphate (ATP), the major cellular energy metabolite, is synthesized to meet requirements for many subcellular processes in the brain such as synaptic transmissions, action potentials, and maintaining resting membrane potentials 10 . Based on ex vivo studies, intracellular ATP levels in neurons decrease following high-frequency electrical stimulation and glutamate exposure, as well as in response to pharmacological inhibition of ATP production [11][12][13] . Thus, intracellular ATP concentrations can be regarded as the cellular energy status reflecting both energy supply and consumption. Nevertheless, the currently known mechanisms regulating intracellular ATP levels in neurons are based on ex vivo observations and may, therefore, differ in vivo, as ex vivo conditions do not reproduce local or global brain energy homeostatic mechanisms.
To investigate whether the cellular energy status is maintained constant in the brain of living animals, we conducted in vivo optical measurements of intracellular ATP dynamics in neurons using a genetically encoded fluorescent ATP sensor 11,14 . With fiber photometric recordings, we observed that in vivo intracellular ATP levels in cortical neurons decreased under local electrical stimulation, which was compatible with the results of a previous ex vivo report 11 , whereas hemodynamic responses were also induced in terms of local brain energy homeostatic functions. Furthermore, fiber photometric recordings and wide-field microscopic imaging revealed that global fluctuations in intracellular ATP levels of cortical neurons depend on the sleep-wake states of animals, presumably affected by local and global brain energy homeostatic mechanisms. Simultaneous in vivo recording of neuronal activity and cerebral hemodynamics could help us understand the unique characteristics of intracellular ATP dynamics in neurons and mechanisms of brain energy homeostasis as a whole.
Results
Electrical stimulation diminishes ATP levels in neurons. It was previously shown that ex vivo, high-frequency electrical stimulation led to a decrease in axonal ATP levels, possibly reflecting elevated ATP consumption 11 . However, these observations may not reflect the in vivo conditions, since brain metabolic activities related to energy supply, including blood flow, were also shown to focally enhance following local electrical stimulation 15,16 . To evaluate the effect of local brain energy homeostatic functions to maintain the neuronal energy status constant, we observed in vivo neuronal intracellular ATP responses under local electrical stimulation, compatible with previous ex vivo observations 11 .
To examine ATP dynamics in response to electrical stimulation in vivo, we monitored intracellular ATP levels in layers 5 and 6 of the cerebral cortex (primary motor area) using Thy1-ATeam transgenic mice, in which the genetically encoded ATP indicator, AT1.03YEMK (ATeam), was expressed in pyramidal neurons under the control of a Thy1 promoter 11 (Fig. 1a). In this mouse line, we recorded the compound intracellular ATP signals with a fiber photometric system [17][18][19] as the ratio of fluorescence intensity of yellow fluorescent protein (YFP) to cyan fluorescent protein (CFP) (Y/C ratio), which was based on Förster resonance energy transfer (FRET) 14 occurring on a timescale of microseconds. As changes in pH can modulate fluorescence from some versions of YFP 20 , we inspected the pH dependency of the FRET signal of purified AT1.03YEMK constructs 14 . The fluorescence emission ratio was almost invariant at pH >7.3 (Supplementary Fig. 1). The ratio decreased slightly as cellular pH fell below 7.3 when cytosolic ATP levels were higher than 2.0 mM. These findings suggest that the FRET signal from AT1.03YEMK would not be heavily corrupted by cellular pH fluctuations [21][22][23] .
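The ratiometric FRET readout described above reduces to a per-sample YFP/CFP ratio. A minimal sketch, in which the baseline-normalization step and window length are illustrative choices rather than the paper's exact processing:

```python
import numpy as np


def yc_ratio(yfp: np.ndarray, cfp: np.ndarray, baseline_n: int = 100):
    """FRET readout for the ATeam sensor: the YFP/CFP emission ratio.

    Returns the raw ratio and its change relative to a baseline window
    (delta-R/R0), a common normalization for photometry traces. The
    baseline window length is an illustrative choice.
    """
    r = yfp / cfp
    r0 = r[:baseline_n].mean()
    return r, (r - r0) / r0


# Toy traces: the Y/C ratio falls as ATP (and hence FRET efficiency) drops.
yfp = np.array([2.0, 2.0, 1.9, 1.8])
cfp = np.array([1.0, 1.0, 1.0, 1.0])
ratio, dr = yc_ratio(yfp, cfp, baseline_n=2)
print(ratio)
```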
During cortical electrical stimulation with the electrode adjacent to the optical fiber (Fig. 1b), intracellular ATP levels significantly decreased at frequencies of 10, 30, 50, and 100 Hz in a stimulation frequency-dependent manner (p < 0.0001, F = 16.99, one-way analysis of variance (ANOVA); n = 6 mice; Fig. 1c, d). Note that our electrical stimulation protocols (stimulation intensity: 100 μA; frequencies: 1-100 Hz) were adopted with reference to in vivo electrical stimulation protocols for inducing neurogenic vascular responses in previous studies 15,16 , although stimulation at higher frequencies would induce neuronal firing patterns beyond the spontaneous range 24 . To investigate the dynamics of neuronal activity and the accompanying cerebral metabolic activity evoked by the electrical stimulation, we monitored intracellular Ca2+ activities in pyramidal neurons using fiber photometry and measured cerebral blood flow (CBF) using laser Doppler flowmetry (LDF) (Fig. 1e-h). To monitor the Ca2+ activities in pyramidal neurons, we injected a virus that expresses CaMKII-GCaMP (AAV9-CaMKIIa-jGCaMP7f) into layer 5 of the cerebral cortex (primary motor area). The CaMKII-GCaMP signals were enhanced by local electrical stimulation depending on the stimulation frequency (p < 0.0001, F = 44.91, one-way ANOVA, n = 6 mice; Fig. 1e, f), which is consistent with the results of a previous report 25 . A similar enhancing effect was found for CBF (p = 0.0003, F = 7.21, one-way ANOVA, n = 7 mice; Fig. 1g, h). The maximum amplitude of the ATP signal decayed significantly toward the end of stimulation (p < 0.001 vs. the time of the end of stimulation; Student's t-test with Bonferroni correction) and occurred significantly later compared with those of the CaMKII-GCaMP and CBF signals at frequencies of 30, 50, and 100 Hz (p < 0.01 vs. CaMKII-GCaMP and CBF; Student's t-test with Bonferroni correction; Fig. 1i).
The ATP signal also showed a significantly longer recovery half-time compared with CaMKII-GCaMP or CBF signals at frequencies of 10, 30, 50, and 100 Hz (p < 0.05 vs. CaMKII-GCaMP and CBF; Student's t-test with Bonferroni correction; Fig. 1j). These results suggest that local electrical stimulation induce long-lasting, negative effects on the energy balance within cortical neurons in a stimulation frequencydependent manner, despite a simultaneous enhancement in CBF. These findings suggest that local brain energy homeostatic functions, including hemodynamics, could not be sufficient to always maintain the neuronal ATP levels constant under the electrical stimulation but could slowly complement stimulationinduced neuronal energy consumption. Considering the classical use of in vivo electrical stimulation methodologies 26,27 , our observation of electrical stimulation-induced long-lasting neuronal ATP reduction could be of interest.
State-dependent variation of neuronal cytosolic ATP levels. We next investigated to what extent intracellular ATP levels within the cortical neurons of live animals may physiologically fluctuate depending on the animal's state, using the fiber photometric system. We observed that the ATP signals (Y/C ratio) in the cortex of Thy1-ATeam mice showed slow fluctuations, with frequencies predominantly lower than 0.05 Hz and with durations of convex waves of 30-60 s (Fig. 2a). No waves of the Y/C ratio were observed in control (wild-type) mice. To characterize the intracellular ATP dynamics within cortical neurons, we next monitored ATP signals across the sleep-wake cycles of animals. We simultaneously recorded the CBF using LDF as a metabolic parameter for energy supply, which is controlled by local neuronal activities 28 (Fig. 2b). We observed that ATP levels fluctuated across the sleep-wake cycles and negatively correlated with CBF (Fig. 2c). Intriguingly, ATP levels significantly decreased during the rapid eye movement (REM) sleep state compared with the wake and non-REM (NREM) sleep states (main effect for state using one-way ANOVA: F(2, 9) = 21.35, p < 0.001; followed by multiple comparisons with the Bonferroni test: p < 0.05 for REM sleep vs. wake and NREM sleep, respectively). In contrast, CBF increased during the REM sleep state (F(2, 9) = 21.63, p < 0.001 and p < 0.05 for REM sleep vs. wake and NREM sleep, respectively; n = 4, the same mice; Fig. 2e), which, in combination with a similar increase in other metabolic activities related to energy supply in this state [6][7][8]29 , suggests a negative energy balance in cortical neurons and reflects a high energy expenditure during the REM sleep state.
Focusing on changes in ATP signals and CBF during sleep-state transitions, ATP levels decreased following the transition from the wake state to the NREM sleep state, whereas CBF transiently decreased during the transition but was restored afterward (main effect for state using two-way ANOVA: F(11, 33) = 4.73, p < 0.001 for ATP and F(11, 33) = 5.87, p < 0.001 for CBF; followed by multiple comparisons with the Bonferroni test: p < 0.05 vs. the data of the first and fourth epochs; n = 4 mice; Fig. 2f(i)). During transitions from the NREM sleep state to the wake state, ATP signals increased, whereas CBF transiently increased and then significantly decreased (F(11, 33) = 5.78, p < 0.001 for ATP and F(11, 33) = 8.24, p < 0.001 for CBF; followed by multiple comparisons with the Bonferroni test: p < 0.05 vs. the data of the first and fourth epochs; n = 4 mice; Fig. 2f(ii)). When transitioning from NREM to REM sleep, ATP signals gradually decreased, whereas CBF increased (F(11, 33) = 10.62, p < 0.001 for ATP and F(11, 33) = 5.13, p < 0.001 for CBF; followed by multiple comparisons with the Bonferroni test: p < 0.05 vs. the data of the first and fourth epochs; n = 4 mice; Fig. 2f(iii)). In contrast, at the transition from the REM sleep state to the wake state, ATP signals promptly increased, whereas CBF gradually decreased (F(11, 33) = 9.63, p < 0.001 for ATP and F(11, 33) = 10.56, p < 0.001 for CBF; followed by multiple comparisons with the Bonferroni test: p < 0.05 vs. the data of the first and fourth epochs; n = 4 mice; Fig. 2f(iv)). These changes were accompanied by changes in electroencephalographic (EEG) parameters (delta power and theta frequency) and electromyographic (EMG) activity, which match the defining features of the sleep and wake states.
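The epoch-based transition analysis above amounts to an event-triggered average of the photometry trace around state-change times. A sketch with illustrative window sizes and a synthetic trace (the paper's actual epoch lengths are not reproduced here):

```python
import numpy as np


def transition_triggered_average(signal: np.ndarray, transition_idx: list[int],
                                 pre: int, post: int) -> np.ndarray:
    """Average a signal in windows around state-transition samples.

    Each window spans [t - pre, t + post); transitions too close to the
    edges of the trace are skipped.
    """
    windows = [signal[t - pre:t + post]
               for t in transition_idx
               if t - pre >= 0 and t + post <= len(signal)]
    return np.mean(windows, axis=0)


# Mock ATP trace that steps down at each "wake -> NREM" transition sample.
trace = np.concatenate([np.ones(50), np.zeros(50), np.ones(50), np.zeros(50)])
avg = transition_triggered_average(trace, [50, 150], pre=10, post=10)
print(avg[:10].mean(), avg[10:].mean())  # -> 1.0 0.0
```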
As the intracellular ATP levels in cortical neurons decreased during the transitions from the wake state to NREM sleep state and from NREM to REM sleep states, we assumed that they may be dependent on the depth of NREM sleep and/or sub-state of REM sleep between the tonic and phasic components. However, the temporal cross-correlation analysis revealed no obvious correlation between ATP signals and EEG delta power during NREM sleep state or theta frequency during REM sleep state 30,31 ( Supplementary Fig. 2). These data indicate that the intracellular ATP levels of cortical neurons do not correlate with the depth of NREM sleep or sub-state of REM sleep.
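A per-epoch comparison of ATP signals against EEG delta power, of the kind used for the correlation analysis above, can be sketched as below. This is a minimal Python stand-in (the study's analyses were done in MATLAB); the sampling rate and epoch length are illustrative, and the synthetic data are deliberately constructed to correlate, unlike the null result reported here.

```python
import numpy as np

def epoch_band_power(eeg, fs, epoch_s=4, band=(1.0, 4.0)):
    """Per-epoch band power (e.g. delta, 1-4 Hz) from a continuous EEG trace."""
    n_ep = int(fs * epoch_s)
    powers = []
    for start in range(0, len(eeg) - n_ep + 1, n_ep):
        seg = np.asarray(eeg[start:start + n_ep], dtype=float)
        seg = seg - seg.mean()
        psd = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(n_ep, d=1.0 / fs)
        powers.append(psd[(freqs >= band[0]) & (freqs <= band[1])].sum())
    return np.array(powers)

fs = 250.0                        # assumed EEG sampling rate for this toy example
t = np.arange(int(fs * 40)) / fs  # 40 s -> ten 4-s epochs
rng = np.random.default_rng(1)
amp = 0.5 + 0.05 * t              # delta amplitude ramps up across the recording
eeg = amp * np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)

delta = epoch_band_power(eeg, fs)
atp = np.linspace(1.0, 0.9, delta.size)   # toy ATP trace, one value per epoch
r = np.corrcoef(delta, atp)[0, 1]         # strongly negative for this toy pair
```

With real recordings, an r near zero across NREM epochs would correspond to the absence of correlation described in the text.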
To further assess intracellular ATP dynamics in cortical neurons in response to different states, we used isoflurane-mediated anesthesia, which triggers a state of unconsciousness that shares striking similarities with NREM sleep with regard to the activities in the cerebral cortex and the regions involved 32,33. Isoflurane treatment reversibly decreased intracellular ATP levels in cortical neurons (F(2, 8) = 21.24, p < 0.001, main effect for isoflurane using two-way ANOVA; followed by multiple comparisons with the Bonferroni test, p < 0.05; pre vs. iso and iso vs. post; n = 5 mice; Supplementary Fig. 3), which was accompanied by alterations in EEG and EMG activities. This reduction in ATP levels was comparable with that during REM sleep and following electrical stimulation at 50 Hz (Supplementary Fig. 3B). General anesthesia agents, including isoflurane, diminish global neuronal and glial activities in the cortex 31,34. These findings suggest that isoflurane-mediated anesthesia results in a negative energy balance in neurons, which could reflect a suppression of ATP production related to energy supply that exceeds the reduction in neuronal activity and, therefore, energy consumption 35,36.
Rapid changes in neuronal ATP levels between sub-states. Given our finding that intracellular ATP levels of cortical neurons fluctuate within the sleep-wake states and at transitions between these states (Supplementary Fig. 4), we next examined these fluctuations in more detail. During the wake state, two distinct sub-states are defined: quiet-awake (non-movement) and active-awake (locomotion or other movements) 37. As these sub-states are associated with distinct neuronal activities in the cortex 37, intracellular ATP levels within these cells may also vary between sub-states. To address this possibility, we evaluated neuronal ATP dynamics at the transition from quiet- to active-awake, which is characterized by the onset of EMG activity. ATP signals increased and peaked within a few seconds after the transition from quiet- to active-awake, and remained elevated thereafter (Fig. 3a, b). CBF immediately increased at the transition, reaching its highest level before the peak in ATP concentrations and, following a transient drop, settling at a level slightly below that during quiet-awake. The transition between sub-states was confirmed by EEG gamma power 37, the increase of which was detectable prior to the enhancement of CBF and EMG activities (Fig. 3a).
As we also observed fluctuations in ATP signals and CBF during NREM sleep state (Supplementary Fig. 4), we next assessed these parameters at micro-awakenings during NREM sleep state 38. ATP signals transiently increased following a brief elevation in EEG gamma power and short-term EMG activity (Fig. 3c, d). During the ATP increase, CBF increased concomitantly with the rise in EMG activity, followed by a transient decrease below the baseline before its level normalized. Interestingly, unlike CBF, ATP levels did not drop below the baseline following the increase, suggesting that prompt adjustment of metabolic activities enabled a timely energy supply and a positive energy balance in neurons.

Fig. 1 Local electrical stimulation evokes an intracellular ATP response in cortical neurons in vivo. a Schematic illustration of the fiber photometric system. For the recording of Thy1-ATeam mice, the fluorescence emission is separated by a dichroic mirror. Yellow and cyan fluorescence signals are passed through band-pass filters and detected by photomultiplier tubes (PMTs). For the recording of CaMKII-GCaMP, a single light path for fluorescence emission and a single PMT were used. Dashed white line, optical fiber. LED, light-emitting diode. Scale bar = 200 μm. b Schematic illustration of the optical fiber with a stimulation electrode implanted in the cortex for Thy1-ATeam or CaMKII-GCaMP monitoring. To monitor cerebral blood flow (CBF), the laser Doppler flowmetry (LDF) probe is positioned over the skull where the electrode is implanted. Scale bar = 1 mm. c ATP signals during local electrical stimulation at different frequencies. Signals are normalized between one (mean of 5 s before stimulation) and zero (trough value after 50 Hz stimulation) in each animal and presented as mean (n = 6 mice). d ATP signals represented as area under the curve (AUC). *p < 0.05 and **p < 0.01; one-way ANOVA, Bonferroni test (AUC value for 5 s before stimulation vs. that for 60 s after stimulation; n = 6 mice), mean ± SEM. e CaMKII-GCaMP signals during local stimulation. Signals are normalized between zero (mean of 5 s before stimulation) and one (peak value after 50 Hz stimulation) in each animal and presented as mean values (n = 6 mice). f The AUC of CaMKII-GCaMP signals following stimulation. g CBF during local stimulation. Normalization as in (e) (n = 7 mice). h CBF response as AUC. i Maximum response time of Thy1-ATeam, CaMKII-GCaMP, and CBF following stimulation. **p < 0.01 vs. CaMKII-GCaMP; ††p < 0.01 vs. CBF; ##p < 0.01 vs. time of the end of stimulation; Student's t-test with Bonferroni correction, mean ± SEM. j Recovery half-time of Thy1-ATeam, CaMKII-GCaMP, and CBF following stimulation. *p < 0.05 and **p < 0.01 vs. CaMKII-GCaMP; †p < 0.05 and ††p < 0.01 vs. CBF; Student's t-test with Bonferroni correction.
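The area-under-the-curve (AUC) quantification used for the stimulation responses in Fig. 1 can be sketched with a plain trapezoidal sum over the deviation from the normalized baseline. The trace below is a toy example, not recorded data, and the baseline convention follows the normalization described in the legend (signals normalized to one before stimulation).

```python
def auc(samples, dt=1.0, baseline=1.0):
    """Trapezoidal area between a normalized signal and its baseline.

    For a signal normalized to 1 before stimulation, the area *below*
    the baseline quantifies the stimulation-evoked ATP dip."""
    area = 0.0
    for a, b in zip(samples, samples[1:]):
        area += 0.5 * ((baseline - a) + (baseline - b)) * dt
    return area

# Toy trace: flat at baseline, then a symmetric dip to 0 and back.
trace = [1.0, 1.0, 0.5, 0.0, 0.5, 1.0, 1.0]
print(auc(trace))  # 2.0: the dip spans 4 s with mean depth 0.5
```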
These temporal relationships between the dynamics of ATP levels and CBF during the wake and NREM sleep states were confirmed by temporal cross-correlation analysis (Supplementary Fig. 5). ATP signals showed a positive correlation peak with a positive time lag (a few seconds) relative to CBF in the wake and NREM sleep states, and a negative correlation peak with a negative time lag (a few seconds) in the wake state. This biphasic correlation between ATP levels and CBF was not observed in REM sleep state. These data support our findings that CBF increased preceding the peak in ATP levels and following a transient drop in the wake and NREM sleep states (Fig. 3).
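A minimal version of such a lagged cross-correlation between ATP and CBF traces might look as follows. This is an illustrative Python sketch (the study's analyses were done in MATLAB) with a synthetic CBF-leading-ATP pair; the sampling rate and the ±10 s lag window are arbitrary choices, not the study's parameters.

```python
import numpy as np

def xcorr_lag(x, y, fs, max_lag_s=10.0):
    """Normalized cross-correlation of x vs. y over +/- max_lag_s.

    A positive lag at the peak means x follows y (here: ATP following CBF)."""
    x = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    y = (np.asarray(y, float) - np.mean(y)) / np.std(y)
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.mean(x[max(0, k):len(x) + min(0, k)] *
                          y[max(0, -k):len(y) - max(0, k)]) for k in lags])
    return lags / fs, r

fs = 1.0
t = np.arange(200)
cbf = np.sin(2 * np.pi * t / 40.0)   # toy CBF oscillation, 40 s period
atp = np.roll(cbf, 3)                # toy ATP trace trailing CBF by 3 s
lags, r = xcorr_lag(atp, cbf, fs)
peak_lag = lags[np.argmax(r)]        # recovers the 3 s delay
```

For the real data, the sign and a few-second magnitude of the peak lag are what distinguish the wake/NREM coupling from its absence in REM sleep.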
Cortical synchronization of neuronal cytosolic ATP dynamics.
Since neuronal activity within the cortex fluctuates across the sleep-wake states locally as well as globally 39,40, we wondered whether the observed ATP dynamics between these states are cortex-wide or region-specific. We monitored the intracellular ATP dynamics over a broad area of the cerebral cortex using wide-field imaging in Thy1-ATeam mice (Fig. 4, Supplementary Movies 1-3). Remarkably, intracellular ATP levels changed across the sleep-wake states throughout the monitored area and significantly decreased during REM sleep state (Fig. 4a, b and Supplementary Fig. 6) (main effect for state using one-way ANOVA: F(2, 12) = 81.04, p < 0.001; followed by multiple comparisons with the Bonferroni test: p < 0.05, REM vs. wake and NREM sleep, n = 5 mice; Fig. 4b), which is consistent with the above-described observations. To analyze the spatio-temporal ATP dynamics, we used the six cortical modules as regions of interest (ROIs) (Fig. 4c). As expected from the findings above, state-dependent ATP dynamics were observed in all ROIs (main effect for state using two-way ANOVA: F(2, 9) = 880.3, p < 0.001; followed by multiple comparisons with the Bonferroni test: p < 0.05, REM sleep vs. wake and NREM sleep, n = 6 ROIs; Fig. 4d). At the transitions between states, intracellular ATP levels in cortical neurons synchronously changed in all ROIs (Fig. 4e, Supplementary Movies 1-3). To elucidate the characteristics of synchronization in cortical ATP dynamics, we next assessed the correlation in ATP dynamics between the ROIs across and within the states. Across the wake, NREM sleep, and REM sleep states, the ATP dynamics were highly synchronized among all cortical regions (Fig. 4f). Within each state, the ATP dynamics showed the highest correlation in symmetric ROIs between the right and left hemispheres (Fig. 4g). In one cerebral hemisphere, the cortical ATP dynamics showed higher correlations in the adjacent ROIs compared with non-adjacent ROIs.
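A pairwise Pearson correlation matrix across the six ROI traces, of the kind summarized in Fig. 4f-h, could be computed as sketched below. The "shared component plus local noise" traces are synthetic stand-ins for the real ROI signals, chosen only to mimic the high cortex-wide synchronization reported here.

```python
import numpy as np

rng = np.random.default_rng(2)
shared = rng.standard_normal(300)      # toy cortex-wide common ATP fluctuation
roi_names = ["RA", "RM", "RP", "LA", "LM", "LP"]
rois = {name: shared + 0.2 * rng.standard_normal(300) for name in roi_names}

traces = np.vstack([rois[name] for name in roi_names])
corr = np.corrcoef(traces)             # 6 x 6 Pearson correlation matrix

# Mean off-diagonal correlation as a simple synchronization index.
mean_offdiag = (corr.sum() - np.trace(corr)) / (corr.size - corr.shape[0])
```

A strong shared component pushes the off-diagonal entries toward one, mirroring the high inter-ROI correlations observed in all three states.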
These characteristics of synchronization in cortical ATP dynamics were consistently observed during the wake, NREM sleep, and REM sleep states, and their averaged correlation intensity among ROIs was not significantly altered across the three states (main effect for state using one-way ANOVA: F(2, 9) = 0.86, p = 0.46, n = 4 mice; Fig. 4h).
Discussion
The present study showed that intracellular ATP levels in cortical excitatory neurons fluctuate under physiological conditions depending on the sleep-wake states of animals. Neuronal ATP levels increased with the level of arousal of the animals, whereas they profoundly decreased in an REM sleep-specific manner. These in vivo observations were achieved by recently developed and adopted genetically encoded ATP sensors 12,14,23,41-47, as well as fiber photometry and wide-field microscopy, which were originally used for in vivo monitoring of neuronal calcium activities 19,48. Our data demonstrate the superiority of these techniques over previously used, conventional methodologies such as chemiluminescence imaging by luciferase 49, which lack temporal and spatial resolution, are sensitive to high pH, and, importantly, are restricted to in vitro/ex vivo application. Our most surprising finding was that intracellular ATP levels in cortical excitatory neurons decrease during REM sleep state (Fig. 2). Even though this sleep state is known to be crucial in animals including humans, cortical neuronal activities related to REM sleep and their corresponding functions have not been elucidated. The reduction in ATP concentrations in cortical excitatory neurons during REM sleep state indicates a negative energy balance in these cells, whereas the CBF and glucose metabolism for energy supply were simultaneously enhanced 6-8,29. These observations suggest an REM sleep-specific enhancement of energy-consuming activities and/or disturbance of ATP production in cortical neurons. A rise in energy consumption related to neuronal firing activity seems unlikely, as firing was reported to be decreased in cortical pyramidal neurons during REM sleep state 48,50.
As another characteristic of metabolic activity during REM sleep state, an increase in brain lactate levels has been reported 51, suggesting that anaerobic glycolysis could be promoted in neurons and/or astrocytes during REM sleep state, the predominance of which would result in inefficient ATP production. Furthermore, metabolic processes involved in ATP production, such as oxidative phosphorylation, may markedly decelerate during REM sleep state. In the last step of oxidative phosphorylation, brain thermogenesis likely occurs via the activation of uncoupling proteins, which could block ATP synthesis 52,53. Interestingly, previous studies have reported an increase in brain temperature during REM sleep state 54-56, thus supporting our hypothesis that active metabolic heat production could cause the reduction in ATP levels during this state. Our technique and the data presented here may help unravel the importance of REM sleep and its implications for the brain in future studies.
We found that intracellular ATP levels in cortical neurons increased at the onset of the wake state, upon the transition from quiet- to active-awake, as well as following micro-awakening during NREM sleep state, and in all cases this was preceded by a CBF increase (Figs. 2, 3). These findings indicate a correlation between intracellular ATP concentrations and the level of arousal of animals, caused by a rise in energy production in response to increased activity. Focusing on ATP dynamics within the different states, a transient peak in ATP signals was observed a few seconds after the transition from quiet- to active-awake and after the event of micro-awakening, subsequent to an increase in EEG gamma power and CBF (Fig. 3). The coupling between cortical gamma oscillation and the induced CBF enhancement via glutamate-mediated synaptic transmission is known as NMC 1-4,28. This mechanism may explain the transient increase in intracellular ATP levels in cortical neurons, EEG gamma activity, and CBF, by allowing a transient excess in energy supply, and therefore a positive energy balance in neurons during the wake and NREM sleep states. Interestingly, a time lag of 1-2 s between the CBF peak and the subsequent ATP peak (Fig. 3) was consistent with the time lag in the increase of tissue PO2 after a CBF increase, which could be caused by the time required for oxygen to diffuse through the tissue for ATP production 57.
Our wide-field imaging further revealed that the intracellular ATP dynamics in neurons showed cortical synchronization across the sleep-wake states (Fig. 4), reflecting cortex-wide neuronal activities and hemodynamics 48,58. It is conceivable that these state-dependent, cortex-wide fluctuations of cellular energy status could be affected by volume transmission of neuromodulators, including norepinephrine and acetylcholine, as these were shown to be released in a cortex-wide manner depending on the arousal state of animals 59,60. Furthermore, neuromodulator signals regulate cerebral metabolism as well as neuronal and glial cellular activities 61-65. A general cortex-wide synchronization in ATP dynamics was also found within each state. In more detail, however, a relatively higher correlation was observed between symmetrical cortical regions and neighboring regions, whereas correlations were slightly lower between ROIs further away from each other. This region-specificity of cortical ATP dynamics may be related to the functional connectivity of networks and the underlying neuronal activities. In this case, it would be intriguing to test whether the regional specificity of ATP dynamics would be enhanced or changed in response to certain behavioral activities or sensory stimulation of animals.
Neurons express ATP-sensitive potassium channels (KATP channels) that are part of the machinery enabling electrical excitability 66. However, since little is known about in vivo intracellular ATP dynamics in neurons, the specific functions and characteristics of these channels under in vivo physiological as well as pathological conditions have not been fully elucidated. We discovered sleep-wake state-dependent fluctuations of intracellular ATP levels in neurons, which strongly supports the assumption that such levels directly regulate neuronal firing activity via KATP channels under physiological conditions. Furthermore, the physiological intracellular ATP dynamics we observed provide strong supporting evidence for previous reports regarding the effect of brain energy metabolism on neuronal activity and, ultimately, on a myriad of brain functions such as memory formation and maintenance 67.
Despite the power of the experimental setup used in this study, it still has technical limitations, namely the difficulty of calibrating the ATP-sensor signal to absolute cytosolic concentrations in vivo. Therefore, further technical advances will be needed for more precise measurements of the absolute concentration of neuronal intracellular ATP. As a consequence, we currently do not know whether the in vivo intracellular ATP concentration in neurons is the same as that ex vivo, which has been estimated at approximately 2 mM 21,23,45. Nevertheless, the ATP levels measured here following electrical stimulation, which are within the dynamic range of the ATeam sensor with its dissociation constant (Kd) of 1.2 mM 14, are likely to be comparable with the ATP response found ex vivo in this context using ATeam 11.
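For intuition on why the measured responses sit within the sensor's dynamic range, an idealized ATeam-type binding curve can be written down using the quoted Kd of 1.2 mM. The Hill coefficient and the ratio limits below are illustrative placeholders, not measured sensor parameters, so this is a qualitative sketch rather than a calibration.

```python
def ateam_ratio(atp_mM, kd_mM=1.2, n=2.0, r_min=1.0, r_max=2.0):
    """Idealized FRET (Y/C) ratio of an ATeam-type sensor vs. ATP concentration.

    kd_mM = 1.2 follows the dissociation constant quoted in the text;
    n, r_min, and r_max are illustrative placeholders, not measured values."""
    occupancy = atp_mM ** n / (kd_mM ** n + atp_mM ** n)
    return r_min + (r_max - r_min) * occupancy

# At [ATP] = Kd the sensor is half-saturated, so the ratio sits midway:
print(ateam_ratio(1.2))  # 1.5
```

Around the estimated ~2 mM cytosolic concentration, the curve is still well short of saturation, which is the sense in which a stimulation-evoked dip remains within the dynamic range.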
In order to examine brain activities and functions, neuronal electrical activities and coupled brain metabolic activities (via fMRI and PET imaging) have been utilized. In contrast, the in vivo intracellular ATP levels in cortical neurons observed in this study showed distinct dynamics across the sleep-wake states that differ from those found using the above conventional methodologies. We also observed that intracellular ATP levels decreased upon local electrical stimulation, triggering a rise in both energy supply and consumption, whereas general anesthesia had a similar effect on ATP levels but simultaneously inhibited energy supply and consumption 32,34-36 (Fig. 1 and Supplementary Fig. 2). Taken together, the intracellular ATP levels, which can be regarded as the cellular pool of energy, can be useful for understanding physiological and pathological processes in the brain as well as in the body of live animals, such as hibernation, fatigue, metabolic disorders including ischemia, and degenerative diseases.

Fig. 4 Wide-field optical imaging of intracellular ATP dynamics in cortical neurons. a Representative ATP signals, with EEG and EMG signals, in the whole cortical area across the sleep-wake states imaged in one example mouse. An imaging window was placed over the cortex. Arrows indicate corresponding time points of imaging at the bottom. A: anterior, P: posterior, R: right, L: left. Scale bar = 1 cm. b Mean ATP signals in the observed cortical area during the wake, non-REM sleep (NREMS), and REM sleep (REMS) states. *p < 0.05, one-way ANOVA with the Bonferroni post-hoc test. Data are presented as mean ± SEM (n = 5 mice). c Circles depict the six ROIs in which ATP signals were analyzed. RA: right-anterior, RM: right-middle, RP: right-posterior, LA: left-anterior, LM: left-middle, LP: left-posterior. Scale bar = 1 cm. d Mean ATP signals during the wake, NREMS, and REMS states in six ROIs from one example mouse. Colors correspond to the ROI circles in (c). *p < 0.05, two-way ANOVA with the Bonferroni post-hoc test (n = 6 ROIs). e ATP signals in six ROIs from one animal for the transitions of NREMS-to-REMS and REMS-to-wake. Transitions occurred at 0 s. Data are from 4-s intervals characterized by state transitions. f Correlation map between six ROIs across the sleep-wake states. The data were averaged from four mice. g Correlation maps between six ROIs in the wake, NREMS, and REMS states. Note that higher correlations were observed between hemisphere-symmetric ROIs (RA-LA, RM-LM, and RP-LP) and neighboring ROIs (e.g., RA-RM or LM-LP). Data are averaged from four mice. h The comparison of correlation coefficients among states. Data are presented as mean ± SEM (n = 4 mice). See also Supplementary Movies 1-3 and Supplementary Fig. 6.
Methods
Animals. All animal procedures were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Animal Research Committee of the Tokyo Metropolitan Institute of Medical Science (approval No. 16017) and Tohoku University (approval No. 2018LsLMO-020). All efforts were made to minimize animal suffering or discomfort and to reduce the number of animals used. Experiments were performed using 3- to 12-month-old male and female mice. Thy1-ATeam transgenic mice 11 were used for ATP recordings, and wildtype mice were used for calcium or CBF recordings under electrical stimulation. The genetic background of all transgenic and wildtype mice was C57BL/6J. Mice were housed under controlled lighting (12 h light/dark cycle) and temperature (22-24°C) conditions. Food and water were available ad libitum.
Surgical procedure. Stereotaxic surgery was performed under anesthesia with a ketamine-xylazine mixture (100 and 10 mg/kg, respectively, i.p.) or with isoflurane using a vaporizer for small animals (Bio Research Center). For fiber photometric recordings, an optical fiber cannula (CFMC14L05, ⌀ 400 μm, 0.39 NA; Thorlabs) was unilaterally implanted into the left primary motor cortex (+1.1 mm anteroposterior, +1.5 mm mediolateral from bregma, -1.2 mm dorsoventral from the skull surface, according to the Mouse Brain Atlas 68). For local electrical stimulation, an optical fiber cannula attached to 50-μm-diameter tungsten wires (#2016971, California Fine Wire) was implanted. For virus injection, 33-gauge syringe needles (Hamilton) were used to infuse AAV9-CaMKIIa-jGCaMP7f (3.0 × 10^13 vg per mL) into the left primary motor cortex; 0.2 μL of viral solution was infused at a rate of 0.02 μL min−1. For CBF measurements, the skull over the contralateral cortical hemisphere was exposed and sealed with silicone impression material (Shofu Inc.) until the beginning of the experiment. To fix the heads of mice, a U-shaped plastic plate was attached to the skull using dental cement (Fuji lute; GC Corporation) to enable its fixation to the stereotaxic frame during recordings.
For wide-field optical imaging, the skull over the entire cerebral hemispheres was exposed and sealed with an ultraviolet-curing jelly nail. To fix the head of mice under the microscope, a stainless chamber frame (Narishige) was attached to the skull using dental cement to enable its fixation to the stereotaxic frame during recordings.
Electrodes for EEGs and EMGs were implanted on the skull over the frontal cortex and neck muscles, respectively, with their reference electrode being implanted on the skull over the cerebellum. The mice were then housed separately for a recovery period of at least 5 days.
Fiber photometry. A fiber photometric system designed by Olympus Engineering (custom-made) was used to detect the compound intracellular ATP or calcium dynamics 19 (Fig. 2a). For ATP signal recordings, the input light (435 nm; silver-LED, Prizmatix) was reflected off a dichroic mirror (DM455CFP; Olympus), coupled into an optical fiber (M41L01, ⌀ 600 μm, 0.48 NA; Thorlabs) linked to a second optical fiber (M79L01, ⌀ 400 μm, 0.39 NA; Thorlabs) through a pinhole (⌀ 600 μm), and then delivered to an optical fiber cannula (CFMC14L05; Thorlabs) implanted into the mouse brain. Light-emitting diode (LED) power was <200 μW at the fiber tip. Emitted yellow and cyan fluorescence light from ATeam was collected via the optical fiber cannula, divided by a dichroic mirror (DM515YFP, Olympus) into cyan (referred to as CFP; 483/32 nm band-pass filter, Semrock) and yellow (referred to as FRET; 542/27 nm band-pass filter; Semrock) components, and detected by two separate photomultiplier tubes (H7422-40; Hamamatsu Photonics). For Ca2+ recording, input light (475 nm; silver-LED, Prizmatix) and dichroic mirrors (DM490GFP; Olympus and FF552-Di02-25×36; Semrock) were applied to detect GCaMP fluorescence (495-540 nm band-pass filter; Olympus). The fluorescence signals were digitized by a data acquisition module (NI USB-6008; National Instruments) and recorded by a custom-made LabVIEW program (National Instruments). Signals were collected at a sampling frequency of 1 kHz. The recordings were carried out under habituated head-fixed conditions in the latter half of the light phase.
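The Y/C ratio computation from the two PMT channels can be sketched as follows. The Hamming-window smoothing mirrors the filtering described later in the Methods, while the traces, window length, and noise levels are synthetic; the real pipeline is implemented in LabVIEW/MATLAB, so this is only a conceptual Python stand-in.

```python
import numpy as np

def yc_ratio(yfp, cfp, win=11):
    """Y/C (FRET/CFP) ratio from the two PMT channels, smoothed with a
    normalized Hamming window (window length is an illustrative choice)."""
    yfp = np.asarray(yfp, float)
    cfp = np.asarray(cfp, float)
    ratio = yfp / cfp
    kernel = np.hamming(win)
    kernel /= kernel.sum()
    return np.convolve(ratio, kernel, mode="same")

rng = np.random.default_rng(3)
n = 1000                                  # 1 s of data at the 1 kHz sampling rate
cfp = 1.0 + 0.02 * rng.standard_normal(n)
yfp = 1.2 * cfp + 0.02 * rng.standard_normal(n)   # true underlying ratio ~1.2

r = yc_ratio(yfp, cfp)   # hovers around 1.2 away from the edges
```

Taking the ratio before smoothing, as here, cancels fluctuations common to both channels (e.g., excitation intensity), which is the usual rationale for ratiometric FRET readouts.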
Wide-field optical imaging. Wide-field imaging of ATP signals in vivo was performed using a macro-zoom fluorescence microscope (MVX-10, Olympus), with a high power LED (X-Cite 120LED, Lumen Dynamics Group Inc., wavelengths 385 nm and 430 nm) and an objective lens (1× MVX Plan Apochromat Lens, NA 0.25, Olympus). The microscope was equipped with a reverse dichroic mirror (U-MCFPHQ, Olympus), and the emission light was separated by a splitter (W-VIEW GEMINI, Hamamatsu Photonics) with band-pass filters (FF01-550/49-25 and FF01-496/20-25, Semrock) for the yellow and cyan light, respectively. For the recording of the entire hemisphere, a rectangular region (12 mm × 9 mm) was imaged at 320 × 240 pixels, at a focus set at 0.75 mm in depth from the top of the cortical surface. Images were acquired using the HCImage Live software at a frame rate of 1 Hz (exposure time: 500 ms) with a cooled CCD camera (ORCA-Flash4.0 V3, Hamamatsu Photonics). The recordings were carried out under habituated head-fixed conditions in the first half of the light phase.
EEG, EMG, and CBF recordings. EEGs and EMGs were always monitored during ATP measurements. During fiber photometric recordings, EEG and EMG signals were amplified (Model 3000, A-M Systems), filtered, and digitized at 1 kHz using an analog-to-digital converter (USB-6008, National Instruments). EEG signals were high-pass- and low-pass-filtered at 0.1 Hz and 300 Hz, respectively. EMG signals were high-pass- and low-pass-filtered at 1 Hz and 300 Hz, respectively. The data acquisition software was written in LabVIEW (National Instruments). Simultaneously with the fiber photometric recordings, CBF was monitored using laser Doppler flowmetry (LDF; wavelength 780 nm; ATBF-LN1; Unique Medical) with a 0.5-mm needle probe attached to the skull over the contralateral cortical hemisphere, based on CBF symmetry 69,70 (Fig. 3b). CBF signals were digitized at 1 kHz using an analog-to-digital converter (USB-6008).
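A software analogue of the EEG band-pass just described (high-pass at 0.1 Hz, low-pass at 300 Hz) can be sketched with cascaded single-pole IIR filters. This is only a rough stand-in for the actual hardware amplifier filters, included to make the band-pass behavior concrete; the cutoff frequencies are from the text, everything else is illustrative.

```python
import numpy as np

def first_order_filter(x, fs, fc, kind):
    """Single-pole IIR filter: 'low' keeps components below fc (Hz),
    'high' keeps components above fc. A crude stand-in for analog filters."""
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * np.pi * fc)
    out = np.zeros_like(x, dtype=float)
    if kind == "low":
        a = dt / (rc + dt)
        out[0] = x[0]
        for i in range(1, len(x)):
            out[i] = out[i - 1] + a * (x[i] - out[i - 1])
    else:  # high-pass
        a = rc / (rc + dt)
        out[0] = 0.0
        for i in range(1, len(x)):
            out[i] = a * (out[i - 1] + x[i] - x[i - 1])
    return out

def band_pass(x, fs, f_lo=0.1, f_hi=300.0):
    """0.1-300 Hz band-pass, matching the EEG filter settings in the text."""
    return first_order_filter(first_order_filter(x, fs, f_hi, "low"),
                              fs, f_lo, "high")

fs = 1000.0
t = np.arange(int(fs * 10)) / fs
x = np.sin(2 * np.pi * 10 * t) + 5.0   # 10 Hz "EEG" riding on a DC offset
y = band_pass(x, fs)                   # offset removed, 10 Hz passes ~unchanged
```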
During wide-field imaging, EEG and EMG signals were amplified (DAM50, WPI), filtered, and digitized at 1 kHz using an analog-to-digital converter (Micro1401-3, CED). EEG and EMG signals were high-pass- and low-pass-filtered at 0.1 Hz and 300 Hz, respectively. The data were recorded using the Spike2 software (CED).
Cortical electrical stimulation. A pulse generator with a constant current output (SEN-7203 and SS-202J, Nihon Kohden) delivered a square pulse (0.1-ms pulse width, 100 μA) at each of five frequencies (1 Hz for 5-s trains, 10, 30, 50, and 100 Hz for 3-s trains). Each animal received eight pulse trains at each frequency during the wake state, judged by online EEG/EMG recordings.
Data analysis
Vigilance state determination. EEG/EMG recordings were automatically scored offline as wakefulness, NREM sleep, or REM sleep state using the SleepSign software version 3 (Kissei Comtec) in 4-s epochs according to standard criteria 71,72 . All vigilance state classifications assigned by SleepSign were examined visually and corrected if necessary. The same individual, blinded to genotype and the experimental condition, scored all EEG/EMG recordings. For detecting the onset of active-awake in the wake state and micro-awakening in the NREM sleep state, the time point at which EMG activity was above a threshold value (mean greater than 10 times the standard deviation) was obtained within each major state.
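The EMG-threshold detection of active-awake onsets and micro-awakenings could be sketched as below. The "mean plus 10 × SD" threshold is one reading of the criterion stated in the text; the baseline segment, refractory period, and traces are all illustrative assumptions.

```python
import numpy as np

def detect_onsets(emg, baseline, n_sd=10.0, refractory=25):
    """Indices where EMG activity first exceeds a threshold derived from a
    quiescent baseline (one reading of the 'mean + 10 x SD' style criterion).
    `refractory` (in samples) suppresses re-triggering on the same bout."""
    thr = np.mean(baseline) + n_sd * np.std(baseline)
    onsets, last = [], -refractory
    for i, v in enumerate(emg):
        if v > thr and i - last >= refractory:
            onsets.append(i)
            last = i
    return onsets

rng = np.random.default_rng(4)
quiet = np.abs(0.01 * rng.standard_normal(500))   # quiescent NREM-like EMG
emg = np.abs(0.01 * rng.standard_normal(1000))
emg[300:310] += 1.0                                # a brief micro-awakening
emg[700:705] += 1.0                                # a second one

onsets = detect_onsets(emg, quiet)   # finds the two bout onsets
```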
Fiber photometric and CBF data analysis. Under electrical stimulation, ATP signals (the ratio of YFP (FRET) and CFP fluorescence intensities), CaMKII-GCaMP signals, and CBF signals were normalized between one (mean of 5 s before stimulation) and zero (trough value after 50 Hz stimulation) in each animal. Across the sleep-wake states, ATP signals were determined using a curve-fitting procedure and filtered with a Hamming window. CBF signals containing artifacts were removed, and EMG signals were also filtered with a Hamming window. For the comparison of ATP/CBF levels across the states, the data immediately before and after the state transitions (three consecutive epochs, 12 s, each) were excluded from the analysis. Within each state, ATP/CBF signals (S) were normalized by calculating the Z-score as (S − S_mean)/S_SD, where S_mean and S_SD are the mean and standard deviation values of the wake state. Sleep-wake state transitions were defined as four consecutive epochs (16 s) of one state followed immediately by eight consecutive epochs (32 s) of a distinct state. Theoretically, six types of transitions could exist: wake-to-NREMS, wake-to-REMS, NREMS-to-wake, NREMS-to-REMS, REMS-to-wake, and REMS-to-NREMS. In practice, no animal displayed a 12-epoch sequence meeting the criteria for the REMS-to-NREMS or wake-to-REMS transitions, and thus data were processed only for the other four transition types. EEG delta power (1-4 Hz) was processed as raw values, and EEG theta frequencies (5-10 Hz) were divided by the delta frequencies for standardization. EMG activity was filtered with a Hamming window. Within the identified transitions, ATP and CBF signals, EEG parameters, and EMG activity were normalized by calculating the Z-score as (S − S_mean)/S_SD, where S_mean and S_SD are the mean and standard deviation values of the first three epochs of the wake-to-NREMS transition. All Z-scored data were averaged across all identified transitions exhibited by each animal.
These average curves were subjected to two-way ANOVAs with epoch number as a within-subjects measure. For the onset of active-awake in the wake state and micro-awakening in the NREM sleep state, ATP/CBF signals and EMG activity were averaged across all identified onsets exhibited by each animal and zero-adjusted to the mean values of the one epoch (4 s) immediately before the onset time.
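The transition definition above (four consecutive epochs of one state followed immediately by eight of another) and the reference-segment Z-scoring can be sketched as follows. The hypnogram is a toy example and the state labels are arbitrary; the epoch counts follow the Methods.

```python
import numpy as np

def find_transitions(hypnogram, pre=4, post=8):
    """State transitions defined as `pre` consecutive epochs of one state
    immediately followed by `post` consecutive epochs of another.
    Returns (index_of_first_post_epoch, from_state, to_state) tuples."""
    hits = []
    for i in range(pre, len(hypnogram) - post + 1):
        before, after = hypnogram[i - pre:i], hypnogram[i:i + post]
        if len(set(before)) == 1 and len(set(after)) == 1 and before[0] != after[0]:
            hits.append((i, before[0], after[0]))
    return hits

def zscore_to_reference(sig, ref):
    """Z-score a signal against the mean/SD of a reference segment,
    mirroring the (S - S_mean)/S_SD normalization in the Methods."""
    ref = np.asarray(ref, float)
    return (np.asarray(sig, float) - ref.mean()) / ref.std()

# 4-s epochs: wake (W) x 5, NREM (N) x 8, then REM (R) x 8.
hypno = ["W"] * 5 + ["N"] * 8 + ["R"] * 8
transitions = find_transitions(hypno)   # wake-to-NREMS and NREMS-to-REMS
```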
Imaging data analysis. Image analysis was performed using the ImageJ software (https://imagej.nih.gov/ij/). The original 512 × 512-pixel images were reduced to 64 × 64 pixels by binning. Regarding Fig. 4a, b, we calculated the Y/C ratio based on the complete 64 × 64 image dataset. The ATP signal was calculated from the fluorescence intensities of YFP (FRET) and CFP. Six circular ROIs were symmetrically placed across the entire cerebral cortex (Fig. 4c). Within each state, ATP signals were normalized by calculating the Z-score as (S − S_mean)/S_SD, where S_mean and S_SD are the mean and standard deviation values of the wake state, to compare ATP levels across the states. Sleep-wake state transitions were defined as four consecutive epochs (16 s) of one state followed immediately by eight consecutive epochs (32 s) of a distinct state. Within the identified transitions, ATP signals were normalized by calculating the Z-score as (S − S_mean)/S_SD, where S_mean and S_SD are the mean and standard deviation values of the first three epochs of the state immediately before the transition. Cross-correlation was computed using the mean ATP signal of each ROI. For the cross-correlation analysis, the data for 4 s immediately before and after the state transitions were excluded from the analysis. All data analyses were performed using MATLAB (MathWorks).
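The block-averaging used to reduce the frames before ROI analysis, together with the per-pixel Y/C ratio, can be sketched as below. The uniform test frames are synthetic; the text reports reducing 512 × 512 images to 64 × 64 (a factor of 8 per axis), and the real pipeline used ImageJ and MATLAB rather than Python.

```python
import numpy as np

def bin_image(img, factor):
    """Downsample a 2-D frame by block-averaging, e.g. 512 x 512 -> 64 x 64
    with factor=8, as done before the ROI analysis."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Synthetic uniform frames standing in for the YFP (FRET) and CFP channels.
yfp = np.full((512, 512), 1.2)
cfp = np.ones((512, 512))
ratio = bin_image(yfp, 8) / bin_image(cfp, 8)   # per-pixel Y/C ratio, binned
```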
Statistics. Statistics were calculated using MATLAB (MathWorks). The power spectral data of the Y/C ratios were obtained by a wavelet analysis. Two-sample comparisons were performed by paired t-test or Student's t-test. Multiple group comparisons were performed by repeated-measures ANOVA followed by Bonferroni post-hoc tests. The data with error bars are mean ± SEM in each graph, unless otherwise stated. All tests used are specified in the figure legends or text.
A new generation of crystallographic validation tools for the protein data bank.
This report presents the conclusions of the X-ray Validation Task Force of the worldwide Protein Data Bank (PDB). The PDB has expanded massively since current criteria for validation of deposited structures were adopted, allowing a much more sophisticated understanding of all the components of macromolecular crystals. The size of the PDB creates new opportunities to validate structures by comparison with the existing database, and the now-mandatory deposition of structure factors creates new opportunities to validate the underlying diffraction data. These developments highlighted the need for a new assessment of validation criteria. The Task Force recommends that a small set of validation data be presented in an easily understood format, relative to both the full PDB and the applicable resolution class, with greater detail available to interested users. Most importantly, we recommend that referees and editors judging the quality of structural experiments have access to a concise summary of well-established quality indicators.
Figure S1, related to Figure 2. Outer contours of Ramachandran plots for specific amino acid categories; in both panels, the general-case contours are shown as wider lines (dark blue and purple). (a) Overlapped contours for each of the 16 amino acid types that are included in the "general" distribution (see Fig. 2B) because they match quite well; 98% contours are in dark blue, 99.95% contours in purple. (b) Overlapped contours for the 6 categories recommended by the VTF (Gly in green, trans-Pro in gold, cis-Pro in red, pre-Pro in black, Ile/Val in cyan, and general in wider dark blue and purple), proposed for separate evaluation because they are each very different.
Figure S2. Median, quartile, and extreme percentile clashscore values, for non-overlapping bins covering exact tenth-Å resolutions only (red) and in-between resolutions (blue). Entries reporting exact tenth-Å resolution values score consistently somewhat worse (higher clashscores).
Figure S3, related to Figure 3. All-PDB (X-ray, since 1990) distribution of validation criteria as a function of resolution. Median and quartile levels are plotted smoothly, along with all individual data points for outlier structures beyond the 1st percentile (poor; red) or the 99th percentile (good; blue) values. (See supplementary material for detailed criteria, and for procedures and discussion of these shingle-smoothed, quartile-and-outer-percentile plots with outlier datapoints.) At the right of each panel is the resolution-independent, one-dimensional distribution (green line) with median, quartile, and outer percentile values marked, for the aggregated set of all PDB entries. (A) Percent poor rotamers. (B) Fraction of buried hydrogen-bond donors or acceptors that are unsatisfied.
Additional recommendations to the Worldwide Protein Data Bank
As requested by wwPDB, the X-ray VTF has made recommendations about the components and product of the validation pipeline that will be a part of the new deposition and annotation tool currently being developed by the wwPDB partner sites. In addition, the VTF would like to make the following related recommendations: On wwPDB web sites, the front page for any PDB entry should provide users with an intuitive indication of the global quality of the entry by the key criteria.
Depositors should be urged to include enough information to reproduce the refinement using the deposited coordinates and structure factors. With present technology, this would include cross-validation flags, non-crystallographic symmetry (the definitions of the atoms related by NCS and the target RMSDs), wavelength(s) of data collection, identification of the restraint library and any extra restraints, solvent model, model for atomic displacement parameters (including, if appropriate, TLS parameters and anisotropic U-values), H atom model (if refined but not deposited), identification of refinement target, twinning status and (if relevant) description of twinning. As techniques advance, other information may be required.
The validation process should be automated as much as possible, so that depositors can freely upload revised coordinates for validation, without increasing the workload of the core PDB staff. There should be a clear "test pathway" for validation, in which structures can be validated outside of the deposition pathway. Data submitted to the test pathway should be deleted upon completion of the validation computations.
Both global and per-residue validation data should be provided on the wwPDB web sites in a machine-readable format, which will allow users to compare overall quality of related structures and to view annotations of local quality criteria in the context of either sequence or structure, using compliant molecular display programs. Figure 6C shows a possible representation of per-residue validation data as a scrollable plot.
The validation criteria, including algorithms and cutoff values, should be reviewed regularly by a successor to the current Validation Task Force. We suggest that a five-year cycle would be sufficient to keep up with advances in understanding of structure and validation methodology.
Experimental Procedures
The primary validation criteria were chosen to cover the complementary aspects of experimental data, model-to-data match, geometry, conformation, and packing quality. Preference was given to criteria with a history of broad application, and it was required that freely usable, well documented software for their calculation be available. Each of the key criteria was calculated for all relevant PDB x-ray structures, and the extreme outliers were examined to ensure that they generally identified real problems. In several cases, this process resulted in removal or replacement of bad outlier entries. Since a disproportionate number of the outlier entries are early depositions (< 1% of the entries, but > 30% of the worst outliers on each criterion), the final reference sets of data are here taken only from 1990 onward. Other filters apply for some criteria, such as a minimum size for underpacking and protein-only for unsatisfied H-bonds or Ramachandran outliers. The all-PDB datasets used are very similar but differ slightly between the key validation criteria since they were compiled by different people; they vary from about 47,000 structures for Rfree to about 52,000 structures for clashscore. Most measures of model accuracy correlate strongly with resolution, but RSR-Z scores are already normalized to be resolution-independent. Bond-length outliers show a flat distribution, and bond-angle outliers are only slightly correlated with resolution.
The RMS-Z score is defined as the root-mean-square value of the Z-scores for a particular criterion; the Z-score, in turn, is defined as the deviation from the mean or expected value, divided by the estimated standard deviation. The RMS-Z score is thus a dimensionless quantity, calibrated to reflect the amount of variation expected in each validation measure. Typically, the Z-score is computed using the population standard deviation, where the population can be the entire PDB, structures at similar resolution or (in the case of bond lengths, bond angles and planarities) the set of small molecule structures examined by Engh and Huber (1991;2001).
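A minimal sketch of this definition, with illustrative reference values rather than the Engh & Huber targets:

```python
# Hedged sketch of the RMS-Z score defined above: Z-score each observation
# against a reference mean and standard deviation, then take the
# root-mean-square. The reference values below are invented for illustration.
import math

def rms_z(values, ref_mean, ref_sigma):
    zs = [(v - ref_mean) / ref_sigma for v in values]
    return math.sqrt(sum(z * z for z in zs) / len(zs))

# For a well-restrained model, the RMS-Z of e.g. bond lengths should be near 1
bond_lengths = [1.52, 1.54, 1.50, 1.53, 1.51]
score = rms_z(bond_lengths, ref_mean=1.52, ref_sigma=0.02)
```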
Most validation criteria can be satisfied more easily as the resolution of the diffraction data increases, so that the mean values of the criteria vary significantly with resolution. In order to evaluate the quality of a structure relative to what could be expected for the available data, it is necessary to account for the influence of resolution, most readily by comparison with a set of structures determined at similar resolution. There is a trade-off between choosing a sufficient number of structures for comparison, to reduce statistical error, and choosing too wide a range of resolution, over which there would be real variation. In a previous study, we found that as few as 400 structures at similar resolution could be used to compute the mean and standard deviation of validation criteria (Read and Kleywegt, 2009); at most resolution limits, this requires only a narrow range of resolutions. However, some smoothing is required to avoid the resolution roundoff artefacts documented in Figure S2.
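As a rough illustration of such a resolution-relative comparison, a score can be normalized against the roughly 400 reference entries closest in resolution. The reference data below are synthetic and the numbers invented; this is a sketch of the idea, not the pipeline's implementation.

```python
# Hedged sketch: judge a structure's score against the ~400 reference entries
# nearest in resolution, expressed as a Z-score of that window.
import numpy as np

def resolution_window_z(score, resolution, ref_res, ref_scores, window=400):
    # indices of the `window` reference entries closest in resolution
    order = np.argsort(np.abs(ref_res - resolution))[:window]
    mu, sigma = ref_scores[order].mean(), ref_scores[order].std()
    return (score - mu) / sigma

# Synthetic reference set: a score that worsens linearly with resolution
rng = np.random.default_rng(1)
ref_res = rng.uniform(1.0, 3.5, size=5000)
ref_scores = 5.0 * ref_res + rng.normal(0, 1, size=5000)

# A typical score at 2.0 Å should sit near the middle of its window
z = resolution_window_z(10.0, 2.0, ref_res, ref_scores)
```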
Selecting cutoff values for scoring or listing outliers is not an exact process, but the cutoffs have a strong influence on the usefulness of validation reports. The cutoffs recommended here were guided by validator and user experience with each individual measure. The optimal cutoff value should flag a large fraction of the real problems, but including a significant number of false positives is counterproductive. The cutoffs should be reassessed periodically to balance those two criteria and may need to vary with resolution or molecule type.
Categories for Ramachandran validation
Categories for Ramachandran validation were chosen according to which amino acids had very similar (Figure S1a) or very different (Figure S1b) contours at the 98% and 99.95% levels used for validation. These distributions were made from a MySQL (MySQL, 2006) database containing PDB and validation data for over a million residues from 4400 nonhomologous chains (at the 70% sequence identity level), chosen for resolution < 2.0 Å and an average of resolution and MolProbity score (Chen et al., 2010) of < 2.0. Individual residues were omitted if they had occupancy < 1.0 or any backbone B-factor > 30. Contours were produced as kernel plots with density-dependent smoothing (Lovell et al., 2003); a contour described as 99.95% means that 99.95% of the filtered data is enclosed by that contour. (Data analysis and kinemage graphics for Figures 2 and S1 by Daniel Keedy.) To evaluate an individual residue for validation, the appropriate distribution is chosen by a priority hierarchy of Gly, Pro > pre-Pro > Ile/Val > general (e.g., a Gly that is also a pre-Pro is judged on the Gly distribution, which is the more unusual). That 2D distribution of values (on a 2° grid of φ,ψ) is interpolated to give the score, which counts as a Ramachandran outlier if it falls outside the 99.95% contour (1 in 2000). An analogous procedure is followed for side-chain rotamers, but in the relevant number of dimensions from 1 to 4. The current dataset for rotamers is older, as well as including more divisions and more dimensions, and poor rotamers are flagged only at the 1 in 100 level; future compilations should be able to do better than that.
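The priority hierarchy above can be sketched as a small selection function. The residue names, category labels, and the |ω| < 90° cutoff separating cis- from trans-proline are illustrative assumptions, not the VTF's implementation.

```python
# Hypothetical sketch of the category-selection hierarchy described above:
# Gly/Pro > pre-Pro > Ile/Val > general. The omega cutoff distinguishing
# cis- from trans-proline is an assumed convention, not taken from the text.
def ramachandran_category(resname, next_resname, omega=180.0):
    if resname == "GLY":
        return "glycine"                 # Gly wins even when it is also pre-Pro
    if resname == "PRO":
        # peptide-bond omega near 0 deg -> cis; near 180 deg -> trans
        return "cis-proline" if abs(omega) < 90.0 else "trans-proline"
    if next_resname == "PRO":
        return "pre-proline"             # any non-Gly/Pro residue before a Pro
    if resname in ("ILE", "VAL"):
        return "ile-val"
    return "general"
```

For example, a glycine followed by a proline is judged on the glycine distribution, matching the priority rule quoted above.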
Shingle-smoothed quartile and outer-percentile plots with outlier datapoints
Producing all-PDB plots of the various validation criteria, with smoothed lines for percentile boundaries and individual datapoints for the extreme 1% outliers, was more difficult than one would expect. It is immediately evident that score vs. resolution dependencies are not linear (Figures 3, 4 and S3). Quadratic or log-scale fits are not appropriate either: some criteria plateau at high resolutions, while others have a high occurrence of good entries with genuinely zero outliers. The dispersion (vertical distributions) at specific resolutions cannot be modeled by common probability distributions. Many of the validation criteria have a lower bound for good values and a long tail of large outliers, with a shape that does not fit even a Poisson distribution. For such cases, median-based statistics are more appropriate than mean and standard deviation (such as are used for "box and whisker" plots), so the all-PDB distributions are analyzed and reported as percentile scores.
Resolution is clearly the most robust and meaningful measure for the information content of diffraction data, but it is not a precise measure because of both technical and personal-preference differences in exactly how resolution is defined and in how much that value is rounded off. Initial attempts to plot smooth percentile lines from non-overlapping bins of resolution encountered a surprising artefact from this imprecision of definition and rounding: quite consistently for most validation criteria, entries reporting exact tenths of Å for resolution score somewhat worse than entries reporting in-between values (see Figure S2b for clashscore). PDB headers report only two decimal places for resolution, so in the absence of rounding the ratio of exact tenths to in-between values should be only 1:9; in fact the ratio is about 2:1, as shown by the bin counts in Figure S2a. Factoring in year of deposition reduces the discrepancy only by 1/3 to 1/2; the rest is presumably caused by some combination of the tendency to round toward better resolutions, and perhaps that those who report both precise and conservative values also tend to take more care in other ways. Since these factors do not correctly represent the inherent influence of data quality on structure quality, the reference percentile plots for validation criteria should be more suitably smoothed. For the plots shown in this paper, a set of "shingle-overlapped" (in progressive sets of 3) resolution bins was defined (Table S1), producing much smoother lines: compare the quartile lines in Figure 3B with the jagged versions in Figure S2b. In the main text figures, data points for individual entries are shown only outside of the poor 1 percentile line (in red) and the good 99 percentile line (in blue), since the all-PDB distributions are completely saturated toward the center. 
The resolution-independent, one-dimensional distribution is shown at one side as a green line with the median, quartile, and extreme-percentile values marked (note that the median is always well above the modal value in these highly skewed distributions). A script to produce these plots in the R statistics program (Team RDC, 2005) was developed jointly by WBA, WS, and JSR and used for Figures 3, 4 and S3; it is available from the web site at http://kinemage.biochem.duke.edu, under "Software". Figure S3 shows percentile plots for rotamer outliers and unsatisfied buried H-bond donors/acceptors, to complete the basic data distributions for all key metrics.
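The "shingle-overlapped" smoothing can be sketched roughly as follows; the bin edges, synthetic data, and pooling of exactly three adjacent bins per plotted point are illustrative assumptions based on the description above, not the authors' R script.

```python
# Hedged sketch of shingle-overlapped percentile lines: each plotted point
# pools its own resolution bin with both neighbours (progressive sets of 3),
# which smooths out the tenth-Å roundoff artefact described in the text.
import numpy as np

def shingle_percentiles(resolutions, scores, edges, q=50):
    bins = np.digitize(resolutions, edges)   # bin index for each entry
    n_bins = len(edges) + 1
    out = []
    for i in range(n_bins):
        lo, hi = max(i - 1, 0), min(i + 1, n_bins - 1)  # shingle of 3 bins
        mask = (bins >= lo) & (bins <= hi)
        out.append(np.percentile(scores[mask], q) if mask.any() else np.nan)
    return np.array(out)

# Synthetic usage: a score that worsens linearly with resolution should give
# a smoothly increasing median line.
res = np.linspace(1.0, 3.0, 300)
scores = res.copy()
edges = np.linspace(1.2, 2.8, 9)
median_line = shingle_percentiles(res, scores, edges, q=50)
```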
|
v3-fos-license
|
2018-12-01T02:27:28.391Z
|
2016-04-01T00:00:00.000
|
147525800
|
{
"extfieldsofstudy": [
"Sociology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.hrpub.org/download/20160331/UJER23-19506168.pdf",
"pdf_hash": "38728150749537ebd61ca6f5c64b0565b9581b80",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:776",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"sha1": "38728150749537ebd61ca6f5c64b0565b9581b80",
"year": 2016
}
|
pes2o/s2orc
|
Social Problems in Turkish Social Studies Coursebooks and Workbooks
In Turkey, the social studies course, which is taught in grades 5 to 7 of elementary education, prepares students to solve problems they may encounter in their future lives. The teaching of social problems, so that students get to know them, is therefore one of the most important issues for the social studies course. The primary aim of this study is to examine how social problems are included in social studies coursebooks and workbooks. The 5th, 6th and 7th grade social studies coursebooks and workbooks were analysed in the study. The document analysis method, one of the qualitative research methods, was used in data analysis. The findings of the study showed that the coursebooks and workbooks of the Ministry of National Education included and emphasized more social problems than those of the private publishing companies. Based on the findings, it is suggested that social studies coursebooks and workbooks published by private publishing companies should be modified so that they include and emphasize more social problems.
Introduction
As social entities, humans have struggled to survive since their earliest existence, and in this process have tried in different ways to find solutions to the social problems they encountered. The solution of these social problems brought new developments, which in turn led to the emergence of other social problems. This cause-and-effect relationship has run throughout the history of the human race, and human beings have constantly had to cope with a set of social problems that affect them.
Although the concept of social problem has a long history, there has been no agreement on its definition, and the concept has been defined in various ways from different perspectives over time [5,42]. In this regard, theorists who adopt social conflict theory define a social problem as an issue that emerges from the pressure that a class created by the capitalist system, and possessing economic power, exerts on another class, whereas feminist theorists view it as the pressure that men exert on women. While symbolic interactionists define a social problem as the meaning humans attach to events, constructivists define it as a situation by which a group or community claims to be affected, based on objective data and subjective views. Functionalists, for their part, approach social problems as negative situations that disrupt the functioning of mechanisms within the social order [5,19,22,23,27,30]. Based on these definitions, social problems can be defined as issues that affect one or more groups in society and have one or more solutions. In other words, social problems are related to the various environments that form the wider structures of social and historical life as a whole and influence each other in an intertwined way [2,40].
Societies try to solve the social problems they experience through various channels. One of the most effective means of coping with the social problems encountered is education. As a social organisation, education has a balancing function that helps solve social problems [47]. Educational institutions aim to equip individuals with the skills to cope with the problems they encounter, starting from the first stage of education.
One of the courses in the education system that aims to teach individuals sensitivity to social problems and the skills to cope with them is the social studies course. Within the scope of this course, which is taught at the elementary education stage and is an interdisciplinary course based on the social sciences, individuals are trained to be active citizens who are sensitive to social problems by being taught a set of knowledge, skills and values [26].
Social studies is a course that aims to help individuals realize their social existence; it reflects social sciences such as history, geography, economy, sociology, anthropology, psychology, philosophy, political science and law, as well as citizenship knowledge. It is also a course that combines learning areas under a unit or theme, examines humans' interaction with the social and physical environment, and is formed on the understanding of public education [26]. It is taught by elementary school teachers in the 4th grade, and by social studies teachers in the 5th, 6th and 7th grades. The social studies curricula have been prepared in a spiral structure, adopting a constructivist approach. The curricula contain outcomes, learning areas, skills, concepts, values, activity samples, intermediate disciplines, and the associations among these. Social studies and elementary school teachers carry out their lessons in accordance with the explanations included in the social studies curricula and teacher's books.
As a topic, social problems occupy an important place in the content of the social studies course. As a matter of fact, the statements "believes in the importance of participation, suggests original views on the solutions of personal and social problems" and "Teachers should confront students with real-life problems and contradictory situations frequently and make them reflect on the social problems they encounter by using the events in and outside the school," included among the overall aims of the social studies curricula and the explanations related to the implementation of the curricula, respectively, reveal that there is a close relationship between the concepts of social problem and social studies [26]. In addition, the skill of "dealing with social problems in the environment individually or in cooperation with others, and developing and implementing projects that would contribute to the solution of these problems" is aimed to be taught with regard to social participation, one of the target skills in the social studies curricula [26]. These outcomes and skills given in the curricula also show that it is important to define social problems and raise individuals' awareness in this respect.
In the social studies curricula prepared in 2005, there are explanations related to including social problems in the course [26]. This shows that social problems are of importance for the social studies course. Social studies teachers deliver these issues to students through various ways. One of these ways is the use of social studies coursebooks and workbooks. Coursebooks and workbooks are the primary resources that contribute to student learning and help teachers in the teaching process, and continue to be irreplaceable for teachers despite the developing technology in the last decades [36,39]. In this regard, it is of significance that coursebooks and workbooks should be prepared in accordance with the curricula accepted by the Ministry of National Education, and that they should include the knowledge, skills and values that are aimed to be taught to individuals [4,10].
In the literature, there are studies on social problems; these focus on student attitudes and views regarding social problems, as well as the social problems to which disabled and non-disabled individuals are exposed [15,20,16,33,37]. Besides, there are experimental studies investigating the effect of various variables on sensitivity to social problems, strategy and scale development studies related to the teaching of social problems, and documentary analysis studies on the concepts that are regarded as social problems [1,3,12,21,24,32,43,46]. There are also studies on social problems carried out in the fields of medicine [16,17,49], sociology [7,8,13,14,18,25,29,31,34,35,38,45,48] and psychology [11]. On the other hand, social studies teachers conduct their lessons in accordance with the outcomes stated in the social studies curricula prepared by the Ministry. Therefore, how social problems are included in the coursebooks and workbooks, which are prepared in line with these curricula, is another topic of interest. From this perspective, it is of significance to examine the social problems in the social studies coursebooks and workbooks. This study aimed to examine how social problems are included in the social studies coursebooks and workbooks used at middle school level in Turkey. Toward this aim, answers to the following questions were sought: 1. How are social problems included in the middle school social studies coursebooks and workbooks published by private publishing companies in Turkey? 2. How are social problems included in the middle school social studies coursebooks and workbooks published by the Ministry of National Education in Turkey?
Method
In the study, the documentary analysis method, a qualitative research method, was used, since the social studies coursebooks and workbooks published by the Ministry and by private publishing companies were to be examined. Documentary analysis refers to a systematic process conducted to examine and evaluate printed and electronic documents [50]. It comprises a set of steps: accessing documents, checking their originality, understanding them, analysing the data, and using the data.
Because there are different definitions of social problem suggested by various paradigms, the paradigm of the study had to be determined before accessing the documents. In this regard, objectivism, which offers a definition of social problem that is conceptually clearer in scope, formed the paradigm of the study. According to the objectivist perspective, social problems are social situations that are based on a set of values and scientific judgements and that damage social welfare [28]. This definition was therefore adopted in the study.
In the study, all coursebooks that were available through the official website of the Ministry and were taught in the 2015-2016 school year were initially accessed. In Turkey, the social studies coursebooks published by both the Ministry and private publishing companies are distributed to students free of charge. The coursebooks to be distributed are decided by the Ministry's Board of Education and Discipline. Social studies and elementary school teachers inform their school administrations about the coursebooks they have chosen among those approved by the board, and the school administrations notify the board of their teachers' choices. The coursebooks approved for distribution for the social studies course for the 2011-2016 school years included the 5th grade social studies coursebook and workbook published by Evren Publishing, the 6th grade social studies coursebook and workbook published by Yakın Çağ Publishing, the 7th grade social studies coursebook and workbook published by Tuna Printing, and the 6th and 7th grade social studies coursebooks and workbooks published by the Ministry. The obtained documents were analysed using the content analysis technique. In content analysis, researchers aim to identify the occurrence of certain words or concepts in a dataset consisting of one or more texts in an objective and systematic way, and then to make inferences based on these identifications [9]. Accordingly, the documents obtained in the study were analysed using content analysis, taking the adopted definition of social problem into consideration. Certain codes were reached in the analysis, and these codes were combined into themes. The data were presented and interpreted in accordance with the research questions of the study.
In order to enhance the trustworthiness of the study, 20% of the data analysed by the researchers were also analysed by an independent field expert. Barber and Walczak [6] state that having 20% of the data coded independently by a different field expert, and reporting the agreement reached between the researchers and the independent coders, enhances the trustworthiness of a qualitative study with regard to data analysis. The researchers and the expert who coded the 20% of the data compared their analyses and negotiated until agreement was reached on the findings. Consequently, the themes were finalised and the findings were revealed.
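Two-coder checks of this kind are often quantified with percent agreement or Cohen's kappa. The authors report negotiated consensus rather than a coefficient, so the sketch below is purely illustrative, with invented codes and theme labels.

```python
# Hedged sketch: percent agreement and Cohen's kappa for two independent
# coders. The code lists below are invented examples, not the study's data.
def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # observed agreement
    labels = set(coder_a) | set(coder_b)
    # expected agreement by chance, from each coder's marginal frequencies
    p_e = sum((coder_a.count(l) / n) * (coder_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

a = ["health", "education", "health", "disaster", "health", "education"]
b = ["health", "education", "disaster", "disaster", "health", "education"]
kappa = cohens_kappa(a, b)
```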
Findings on the Coursebooks and Workbooks of Private Publishers
Findings on 5 th grade coursebook and workbook
In the 5th grade social studies coursebook, it was found that social problems were included in five units, namely "I Learn My Rights", "Turkey, Step By Step", "Let's Learn About Our Region", "Workers For the Society", and "One Country, One Flag"; the rest of the units did not include any topics related to social problems.
Based on the social problems found in the 5th grade social studies coursebook, four themes were revealed: education, health, natural disasters and environmental problems. While education and health were given directly as social problems, the themes of natural disasters and environmental problems touched upon social problems mostly experienced as a result of these events. In other words, natural disasters and environmental problems were not themselves regarded as social problems, but the difficulties experienced because of them were. As for the 5th grade social studies workbook, it included activities related to the themes of education, natural disasters and environmental problems within the same units as the coursebook, but no activities related to the health theme; five activities related to these three themes were included.

In the 5th grade social studies coursebook, girls' not being sent to school, the imbalance in the literacy rates of women and men, lack of interest in reading, and the education of disabled individuals were mentioned as social problems. These topics were evaluated under the theme of education. Whereas girls' not being sent to school and lack of interest in reading were directly shown as social problems in the coursebook, the imbalance in the literacy rates of women and men was presented in relation to Atatürk's revolutions in the unit "I Learn My Rights", and the problems individuals experienced in this respect were explained. With regard to the education of disabled individuals, given in the unit "One Country, One Flag", the difficulties disabled individuals and their families faced were included, as well as the solutions implemented for these problems. As for the 5th grade workbook, the activity "Let's Go to School, Girls!" was included about education in the unit "I Learn My Rights".
The topics of using drugs harmful to health, traffic accidents and contagious diseases, which were shown as social problems, were evaluated under the health theme. In the unit "Turkey, Step By Step" of the coursebook, it was mentioned that many people died due to contagious diseases by referring to the diseases experienced in the years of war by saying "We lost more people from contagious diseases like measles, cholera and tuberculosis than we did in the War of Independence" (p. 41). Students were asked to prepare projects providing solutions for the problems regarding the use of drugs harmful to health and traffic accidents in the unit "I Learn My Rights". However, no activities related to the health theme were included in the 5th grade workbook.
The topics of weather events, environmental pollution, unconscious use of natural resources and global warming, which were given as social problems in the 5th grade coursebook, were evaluated under the theme of environmental problems. The news presented under the title "Snow barrier to education" in the unit "Let's Learn About Our Region" of the 5th grade coursebook reads as follows: Due to the snowfall that has affected Kütahya for the last three days, education is suspended in the schools located in the city centre and the districts of Altıntaş, Dumlupınar and Hicarcık as well as the elementary schools with mobile education located in Emet and Çavdarhisar districts. The snowfall also caused the closure of 330 village roads in the districts of Altıntaş, Dumlupınar, Aslanapa, Emet and Çavdarhisar [51].
This excerpt can be shown as an example to social problems that the local community experienced in transportation and education due to the snowfall. In the same unit of the 5 th grade workbook, a survey activity related to the unconscious use of natural resources was also included with regard to the theme of environmental problems.
With respect to the theme of natural disasters, the coursebook included the difficulties and problems that masses of people faced following disasters that happened in Turkey, such as erosion, earthquakes, avalanches, floods and fires. In addition, disasters that happened outside Turkey, such as the earthquake in Pakistan and the tsunami in Japan, were also mentioned. In the coursebook, a piece of news titled "5 Million Affected", describing the disaster in Pakistan and the social and economic problems that followed, was presented.
As a result of the rainfalls in September and October, 240 people died in the floods which engulfed the Sind Province in southern Pakistan, and more than 5 million people had to leave their homes. In the region, where 700 thousand houses were damaged by the floods, agriculture and stock breeding, which are vitally important for the country's economy, suffered a severe blow. 1.5 million hectares of farmland were submerged, 688 thousand hectares of cultivated land were damaged, and hundreds of thousands of small and large animals perished [51].
In the unit "Let's Learn About Our Region" of the 5 th grade workbook, activities mentioning the damages of natural disasters and Turkey's history of earthquakes were given along with the "Kızılırmak Folk Song" under the theme of natural disasters [52].
Findings on 6 th grade coursebook and workbook
In the 6th grade social studies coursebook, social problems were mentioned in five units, namely "Life on Earth", "Turks on the Silk Road", "Our Country's Resources", "Our Country and the World", and "The Electronic Century", while the rest of the units did not include any reference to social problems.
Based on the social problems found in the 6th grade social studies coursebook, four themes were revealed: environmental problems, health, natural disasters and violation of human rights. While the issues evaluated under the themes of environmental problems, health and violation of human rights were given directly as social problems, those evaluated under the theme of natural disasters referred to social problems occurring as a result of natural disasters. In this regard, the theme of natural disasters was not included in the coursebook as a social problem in itself; rather, the difficulties experienced as a result of natural disasters were considered social problems. As for the 6th grade social studies workbook, it included activities related to the themes of environmental problems, health and violation of human rights within the same units as the coursebook, but no activities related to the theme of natural disasters; six activities related to these three themes were included. Environmental problems was one of the themes revealed from the social problems identified in the 6th grade coursebook. In the coursebook, the social problems related to environmental pollution, unconscious use of natural resources and extinction of species were evaluated under this theme. Statements in the 6th grade coursebook such as "excessive hunting of seals and whales at the poles causes the danger of their extinction" (p. 39) can be shown as examples of the social problems that can emerge in relation to environmental problems. In addition, in the unit "Our Country's Resources", a reading text titled "The ecological problems caused by the tanker accident" explained how a tanker accident in the Black Sea affected the lives of species [53].
In the 6th grade workbook, activities such as designing projects, preparing posters to improve sensitivity to the environment, and "Hand in hand in difficult times" on environmental pollution were included in the units "Life on Earth", "Our Country's Resources", and "Our Country and the World".
Health was another theme revealed based on the social problems identified in the 6th grade coursebook. Under this theme, social problems related to contagious diseases were included. With regard to contagious diseases, in the unit "Electronic Century" of the coursebook, diseases such as AIDS, bird flu, swine flu, typhoid, tuberculosis and hepatitis B-C, and the destruction caused by these diseases on the human body were mentioned. Besides, while presenting historical issues in the unit "Turks on the Silk Road", it was also mentioned that contagious diseases are such an important social problem that they can force people to emigrate. As for the unit "Electronic Century" in the 6th grade workbook, it included a poster activity titled "Contagious diseases".
The last theme revealed in the 6th grade coursebook was natural disasters. The coursebook mentioned the difficulties and problems that masses experience after natural disasters in Turkey and the world, such as erosion, earthquakes and fires. In the unit "Our Country and the World" of the coursebook, it was highlighted that the problems experienced had a cause-effect relationship with each other by mentioning that some of the natural disasters that happened in Turkey were due to the unconscious use of natural resources and that these disasters caused deaths. In addition, the earthquake in Pakistan and the tsunami in Japan were mentioned under the topic "The Whole World Hand in Hand", and these pieces of news were the same as those in the 5th grade coursebook. However, no activities related to the theme of natural disasters were found in the 6th grade workbook.
Under the theme of the violation of human rights, the topic "Anti-Piracy" was included, and it was emphasised that unauthorised reproduction of copyright material causes serious economic damage not only to individuals but also to the state, and that this has recently been one of the important problems in Turkey. With respect to this theme, in the same unit of the 6th grade workbook, an activity titled "Say No to Piracy" was included, and students were asked to prepare a poster on this topic [54].
Findings on 7th grade coursebook and workbook
In the 7th grade social studies coursebook, social problems were mentioned in five units, namely "Communication and Human Relations", "Population in Our Country", "Economy and Social Life", "Living Democracy", and "Bridges Between Countries", while the rest of the units did not include any reference to social problems.
Based on the social problems found in the 7th grade social studies coursebook, four themes were revealed: environmental problems, population, health and war. While the issues evaluated under the themes of environmental problems, health and war were given directly as social problems, those evaluated under the theme of population referred to social problems that occurred due to the rapid growth of the population. In this regard, it can be stated that the theme of population was not included in the coursebook as a social problem by itself; rather, the difficulties experienced as a result of population growth, such as immigration, unemployment and irregular urbanisation, were considered as social problems. Besides, it was found that some of the social problems given under the theme of environmental problems in the coursebook, such as immigration and natural disasters, were emphasized as being the source of other social problems. As for the 7th grade social studies workbook, it included activities related to the themes of environmental problems and population within the same units as the coursebook, but there were no activities related to the themes of health and war, and five activities related to the other two themes were included. Environmental problems were one of the themes revealed based on the social problems identified in the 7th grade coursebook. In the coursebook, the social problems related to environmental pollution, destruction of forests, extinction of species and global warming were evaluated under the theme of environmental problems. The following statements presented in the unit "Bridges Between Countries" can be given as an example of the negative effect of global warming on species: The extension of the ice-free period with global warming causes hunger in polar bears living in the Arctic. The polar bears who couldn't find food have started to eat each other.
Environmentalists point out that polar bears may become extinct by the end of this century with the melting of ice caps due to global warming [55].
In addition, the same unit of the coursebook also included a topic titled "Global Warming", which explains that ice caps are melting due to climate change caused by global warming, and that the rise in sea levels as a result of this melting led to Hurricane Katrina in America. In the book, it was emphasized that a social problem can cause other local and global social problems by mentioning the deaths and other social problems, such as contagious diseases, homelessness and economic difficulties, that resulted from the hurricane. In the 7th grade workbook, the unit "Communication and Human Relations" included the Greenpeace actions within an activity titled "Save the Mediterranean with a Click" related to the theme of environmental problems, and the unit "Living Democracy" included the garbage problem within an activity titled "Those Who Direct the Agenda". Besides, the unit "Bridges Between Countries" included a piece of news titled "The World is Warming Up", a concept map and problem solving activities.
Health was another theme revealed based on the social problems identified in the 7th grade coursebook. Under this theme, social problems related to contagious diseases were included in the unit "Bridges Between Countries". In the coursebook, contagious diseases such as AIDS and malaria, and organisations fighting against these diseases, such as the World Health Organization (WHO), were presented under the title of social problems. In the 7th grade workbook, although there were no activities directly related to the health theme, the concept map activities on environmental problems were associated with health.
Another theme revealed in the 7th grade coursebook was war. Under this theme, the social problems experienced in World War I and afterwards, presented in the unit "Bridges Between Countries", were included. In the coursebook, while it was stated that wars affect people as a social problem, how the difficulties experienced in war affect people was also mentioned with the statement "We lost thousands of our soldiers because of contagious diseases and cold" (p. 166) [55]. In the 7th grade workbook, although there were activities related to the war theme, these addressed only the intellectual dimension, and no activities were included with regard to the social problems caused by war.
The last theme revealed from the 7th grade coursebook was population. The social problems in the coursebook related to unemployment, irregular urbanisation, immigration, inadequate infrastructure, the traffic problem and poverty were evaluated under the population theme. In the unit "Population in Our Country" of the 7th grade coursebook, the following statements were included to show what problems are experienced in human life as a result of rapid population growth: The population of our major cities, particularly that of İstanbul, increases rapidly every passing day as a result of the imbalance in the distribution of the population. Thus, our people living in cities face various problems. The main problems include unemployment, irregular urbanisation, inadequate infrastructure and traffic jams [55].
Besides, in the coursebook, it was emphasized that people have to immigrate due to rapid population growth, with the statement "Some people move to another city or country from their birthplace due to reasons such as meeting their economic needs and making their living" (p. 44). In the unit "Population in Our Country" of the 7th grade workbook, the activities "Population Problems" and "Results of Immigration" were included in relation to the population theme [56].
Findings on 6th grade coursebook and workbook
Social problems were found to be mentioned in all the units of the 6th grade social studies coursebook. Based on the social problems found in the coursebook, five themes were revealed: environmental problems, health, natural disasters, population and violation of human rights. While the issues evaluated under the themes of environmental problems, health, population, and violation of human rights were given directly as social problems, those evaluated under the theme of natural disasters referred to social problems that occurred due to natural disasters. In this regard, it can be stated that the theme of natural disasters was not included in the coursebook as a social problem by itself; rather, the difficulties experienced as a result of natural disasters were considered as social problems. As for the 6th grade social studies workbook, it included activities related to the themes of environmental problems, health, natural disasters, and violation of human rights within the same units as the coursebook, but there were no activities related to the population theme, and nine activities related to the other four themes were included. Environmental problems were the first of the themes revealed based on the social problems identified in the 6th grade coursebook. In the coursebook, the social problems related to environmental pollution, unconscious use of natural resources, global warming, destruction of forests, and extinction of species were evaluated under the theme of environmental problems. In the 6th grade coursebook, this theme can be exemplified by the topic of how the unconscious use of natural resources affects the life of species in a section titled "The World is Alarming" in the unit "Resources of Our Country", and by the news titled "To Save the Black Sea" describing the rapid growth of pollution in the Black Sea in the unit "Our Country and the World".
As for the 6th grade workbook, it included the activities "What could be the solution?", "What is the solution?", "Do Not Let Turkey Become a Desert", "Town of Ulubey" and "Let's Collect Waste Batteries" in the units "I Learn Social Studies" and "Resources of Our Country", in parallel with the coursebook [58].
Health was another theme revealed based on the social problems identified in the 6th grade coursebook. The social problems related to contagious diseases, malignant diseases and the need for blood were evaluated under the health theme. In the unit "Electronic Century" of the coursebook, Crimean-Congo haemorrhagic fever (CCHF) and flu were mentioned, and the damage that these diseases cause in human life and the ways of protection from them were described [57]. Besides, it was emphasized in the same unit that the number of donors should gradually increase, drawing attention to blood donation. In the 6th grade workbook, the newspaper article titled "Medicine and the Society" was presented in the unit "Electronic Century" [58].
Another theme revealed in the 6th grade coursebook was natural disasters. Under this theme, the unit "Our Country and the World" of the coursebook included the problems that masses experience after floods and earthquakes in various countries and the help sent by Turkey to these countries. The unit "Resources of Our Country" of the 6th grade workbook included the activity "Do Not Let Turkey Become a Desert" related to erosion.
In the units "I Learn Social Studies", "Turks on the Silk Road", "Adventure of Democracy" and "Electronic Century" of the 6th grade coursebook, the topics of violence against women, rights of disabled individuals, gender discrimination, wage theft, and discrimination were evaluated under the theme of violation of human rights. In the coursebook, it was emphasized that today disabled individuals experience great difficulties due to architectural inadequacies, and that individuals suffer from piracy. Besides, examples related to human rights were provided from history: the incidents of violence against women in the pre-Islamic period and the gender discrimination in the pre-Republican period were mentioned. The 6th grade workbook also included the activities titled "What Could Be the Solutions?" and "We Are Against Piracy" [58].
The last theme revealed from the social problems in the 6th grade coursebook was population. Related to this theme, the traffic problem due to rapid population growth was mentioned in the unit "I Learn Social Studies", and it was noted in the unit "Turks on the Silk Road" that various difficulties were experienced throughout history because of population growth and that this growth caused immigration. However, no activities were included in the 6th grade workbook in this respect.
Findings on 7th grade coursebook and workbook
Social problems were found to be mentioned in all the units of the 7th grade social studies coursebook. Based on the social problems found in the 7th grade social studies coursebook, seven themes were revealed: environmental problems, population, war, health, immigration, education, and violation of human rights. While the issues evaluated under the themes of environmental problems, health, war, education and violation of human rights were given directly as social problems, those evaluated under the themes of population and immigration referred to social problems that occurred due to the rapid growth of the population and to immigration. In this regard, it can be stated that the themes of population and immigration were not included in the coursebook as social problems by themselves; rather, issues such as unemployment and irregular urbanisation experienced as a result of population growth and immigration were considered as social problems. As for the 7th grade social studies workbook, it included activities related to the themes of environmental problems, immigration, health, war and population within the same units as the coursebook, but there were no activities related to the themes of education and violation of human rights, and eight activities related to the other five themes were included. Environmental problems were one of the themes revealed based on the social problems identified in the 7th grade coursebook. In the coursebook, the social problems related to environmental pollution, destruction of forests, unconscious use of natural resources, global warming, and the resulting climate change and desertification were evaluated under the theme of environmental problems. In the unit "Living Democracy" of the 7th grade coursebook, the text titled "Pollution Alert in Creek" (p. 158) under the topic "Environment Law Is the Concern of All of Us" can be shown as an example of social problems related to environmental problems.
Besides, in the topic titled "Global Solutions to Global Problems" in the unit "Bridges Between Countries" of the coursebook, it was emphasized that environmental pollution is not only a national matter but also a global issue, and that it constitutes a major threat to the world: "Soil, water and air are all being polluted rapidly, and the greenhouse gases accumulated in the atmosphere cause global warming and climate change. In the process of climate change, more severe droughts, floods, storms, contagious diseases and environmental pollution threaten our world" (p. 176) [59]. Similar to the coursebook, the 7th grade workbook also included the activity "The First City That Technology Changed" in the unit "Economy and Social Life", and preparing posters, drawing caricatures and problem solving tasks under the activity "From the Perspective of Caricatures" in the unit "Bridges Between Countries" [60].
Immigration was another theme revealed based on the social problems identified in the 7th grade coursebook. Under this theme, the social problems that cause immigration and that are caused by immigration were dealt with, and ethnic and religious pressures and brain drain were mentioned. Besides, it was highlighted that population growth also causes immigration. The following statements presented in the unit "Population in Our Country" of the 7th grade coursebook describe how the social problems caused by immigration affect human life: Immigration from the countryside leads to population growth in cities. The rapid growth of the population causes housing shortages. The settlement areas expand as a result of immigration, and industrial facilities become part of the city. The agricultural fields in the immediate environment start to be used for different purposes. As is seen in the photograph on the left hand side, the traffic gets busier. Schools and hospitals can no longer meet the demand. Investments towards population growth also become a burden on the country's economy [59].
At the same time, the social problems that cause immigration were also presented in the coursebook. With respect to immigration, the coursebook provided the following statements: With the Treaty of Lausanne following the War of Independence, the population exchange agreement was signed between Turkey and Greece. Accordingly, Turks migrated from Greece to Turkey, and Greeks from Turkey to Greece. Due to regime, ethnic and religious pressures, a number of our compatriots living in the Balkans migrated to Anatolia [59].
In these statements, the issue of the population exchange was explained, and it was emphasized that elements such as terrorism, war, ethnic and religious pressures, and regime also cause immigration. As for the 7th grade workbook, it included the activities titled "Now, Migration for Water" and "Our New Homeland", and a performance project titled "From where we were born to where we make our living" [60].
Population was another theme revealed from the 7th grade coursebook. The social problems in the coursebook related to unemployment, irregular urbanisation, immigration, inadequate infrastructure, the traffic problem and poverty were evaluated under the population theme. These problems were also associated with the immigration theme. In the unit "Population of Our Country" of the 7th grade coursebook, it was emphasized that population growth also causes other social problems: "The world's population, which is getting close to seven billion people, continues to grow rapidly. Rapid population growth and the consequent increase in consumption drain the natural resources of the world as well as cause many global problems" (p. 45) [59]. In addition, it was highlighted that the rapidly growing population in Turkey also causes the unemployment problem. Based on these examples, it can be stated that students' attention was drawn to both national and global problems with regard to the population theme in the coursebook. Moreover, the unit "Population of Our Country" of the 7th grade workbook included an activity titled "Population as the Machine of Development".
Another theme revealed in the 7th grade coursebook was war. The social problems due to wars and civil wars throughout history, as well as terrorism and class and boundary conflicts, were evaluated under the war theme. In the coursebook, it was mentioned that war and terrorism are global social problems and have serious effects on people. Besides, the United Nations, which was founded to ensure international peace and security, was also mentioned. As for the 7th grade workbook, although there were no activities directly related to the war theme, the problems caused by war were mentioned in an activity titled "Media" in the unit "Communication and Human Relations".
In the 7th grade coursebook, women and child workers, colonialism, slavery, and gender discrimination were evaluated under the theme of violation of human rights. In the unit "Economy and Social Life" of the coursebook, it was stated that the working class that emerged with the industrial revolution caused various social problems, and that the number of women and child workers increased day by day. In addition, while elaborating on history in the same unit, colonialism and people who worked as slaves were also mentioned. However, no activities were included in the 7th grade workbook with respect to this theme.
Health was another theme revealed based on the social problems identified in the 7th grade coursebook. The social problems related to contagious diseases and death were evaluated under this theme. In the coursebook, it was pointed out that contagious diseases like AIDS and malaria are a global social problem. As for the 7th grade workbook, it included the activity titled "Towards the Solution", which mentioned that the average life expectancy in Swaziland is 32 years due to the HIV virus and tuberculosis.
The last theme revealed from the 7th grade coursebook was education. In the unit "Population of Our Country" of the coursebook, the social problems due to girls' not being sent to school and the literacy rate were included in relation to the education theme. However, these problems were not dealt with separately; it was only mentioned that they were social problems. On the other hand, no activities were included in the 7th grade workbook in this respect.
During the research process, it was observed that the techniques used in relation to social problems varied across the publishers. Whereas the workbooks published by private publishing companies included activities such as preparing posters, question-answer, crossword puzzles, problem solving, and preparing projects, the workbooks of the Ministry of National Education were observed to contain, in addition to those in the private publishers' books, fishbone diagram, composition writing and newspaper article activities.
Comparison of the Coursebooks and Workbooks of the Ministry and Private Publishers in terms of their Inclusion of Social Problems
As is shown in Table 1, the social studies coursebooks and workbooks published by the Ministry and the private publishing firms differed from each other in terms of their inclusion of social problems. In the coursebooks and workbooks, which were prepared based on the same curricula, there were eight units in 5th grade, seven units in 6th grade and seven units in 7th grade. It was found that in the 5th grade coursebook and workbook, social problems were included in five of the eight units, and these five units contained the themes of education, health, natural disasters, and environmental problems. Since only the coursebook and workbook published by a private firm, Evren Publishing, were used in 5th grade, it was not possible to make a comparison between the books of the private publishers and the Ministry for this grade level.
In the 6th grade coursebooks and workbooks, there are seven units in total. Those of the private firms included social problems in five of these seven units, and these five units contained four themes, which were environmental problems, health, natural disasters, and violation of human rights. On the other hand, those of the Ministry included social problems in all of the seven units, and in addition to the themes in their private firm counterparts, the books of the Ministry also touched upon the population theme and thus contained five themes. In this regard, it can be stated that the coursebook and workbook of the Ministry included more themes related to social problems, in more units, compared to those of the private firms.
The 7th grade social studies coursebooks and workbooks included seven units. Those of the private firms incorporated social problems in five of these seven units, and these five units had four themes, which were environmental problems, health, population, and war. On the other hand, those of the Ministry included social problems in all of the seven units, and in addition to the themes contained in the books of the private firms, the books of the Ministry also had the themes of violation of human rights, immigration, and education, and thus contained a total of seven themes. Therefore, it can be said that the coursebook and workbook of the Ministry included more themes related to social problems, in more units, compared to those of the private firms.
Results, Discussion and Suggestions
Based on the findings obtained from the social studies coursebooks and workbooks published by private publishing companies, four themes were revealed from the social problems given at each grade level. In this sense, the themes of education, health, natural disasters and environmental problems were found in the 5th grade coursebook and workbook; the themes of environmental problems, health, natural disasters and violation of human rights in the 6th grade coursebook and workbook; and the themes of environmental problems, population, health and war in the 7th grade coursebook and workbook. In addition to the four themes revealed at each grade level, five activities were included in the 5th grade workbook, six activities in the 6th grade workbook, and five activities in the 7th grade workbook. While the themes of education, health, violation of human rights and war were regarded directly as social problems in the social studies coursebooks and workbooks of the private publishers, it was emphasized that the themes of natural disasters, environmental problems and population were not social problems by themselves but causes of other social problems.
Based on the findings obtained from the social studies coursebooks and workbooks published by the Ministry, four themes were revealed in 5th grade, five themes in 6th grade, and seven themes in 7th grade. Social problems were found to be shaped by the themes of education, health, natural disasters and environmental problems in the 5th grade coursebook and workbook; the themes of environmental problems, health, natural disasters, population and violation of human rights in the 6th grade coursebook and workbook; and the themes of environmental problems, population, war, health, immigration, education and violation of human rights in the 7th grade coursebook and workbook. In addition, based on the grade levels, five activities were included in the 5th grade workbook, nine activities in the 6th grade workbook, and eight activities in the 7th grade workbook. While the themes of education, health, violation of human rights and war were regarded directly as social problems in the social studies coursebooks and workbooks published by the Ministry, it was emphasized that the themes of natural disasters, environmental problems, immigration and population were not social problems by themselves but causes of other social problems.
Comparing the themes and activities of social problems in the coursebooks and workbooks of the private publishers and the Ministry, it was found that the books published by the Ministry gave wider coverage to social problems than those published by the private publishers. Besides, the books of the Ministry included more activities and more diverse themes than those of the private publishers. For example, although immigration was covered in the books by both types of publishers as a topic, those of the private publishers did not regard immigration as a social problem, whereas the books of the Ministry presented immigration as well as its causes and results as a social problem.
With respect to poverty, Yakar [43] found that it was included as a social problem in the unit "Bridges Between Countries" of the 7th grade social studies coursebook, while Adalar [1] revealed that it was not included as a social problem in the social studies coursebooks. Therefore, the results of this study and other studies in the literature show differences. Moreover, this study is also different in terms of examining the social problems in workbooks in addition to coursebooks.
The workbooks of both types of publishers also showed differences in terms of the activities they included. The workbooks of the private publishers mostly included activities such as newspaper articles, concept maps, preparing posters, crossword puzzles, preparing projects and problem solving, whereas those of the Ministry additionally included activities such as fishbone diagrams and composition writing.
In the coursebooks of both types of publishers, social problems were found to be included in both national and global contexts. Similar to this result of the study, Yakar [43] also revealed that poverty was dealt with in both global and national contexts. Besides, social problems were included not only within current events but also within events in history. However, it was also found that some of the examples given during the presentation of certain topics were repeated across different grade levels.
In the study, the social studies coursebooks and workbooks of the private publishers were determined to include fewer social problems than those of the Ministry. Based on this result, it can be suggested that private publishing companies should give more coverage to social problems in their social studies coursebooks and workbooks. Besides, considering how social problems were included in the social studies coursebooks, there were few explanations related to social problems in the explanation sections of the social studies curricula. In the study, it was found that the examples of social problems were repetitive in the coursebooks. Therefore, the examples should be made more diverse. In addition, the social problems in the social studies coursebooks and workbooks should be updated and made more interesting.
Does Long-Term Care Policy Enable or Limit Volunteers’ Roles in Enhancing Resident Quality of Life?
Abstract This paper examines how volunteer roles are represented in Canadian long term care (LTC) policy in four Canadian jurisdictions, attending to how these regulated roles might impact resident quality of life. Overall, we found that policies define volunteer roles narrowly, which may limit residents' quality of life. This happens through (1) omitting volunteers from most regulatory policy, (2) likening volunteers to supplementary staff rather than caregivers with unique roles, and (3) over-emphasizing residents' safety, security and order. We offer insights into promising provincial policy directions for LTC volunteers, yet we caution against further regulating volunteers. Instead, we argue for addressing the cultural, social and structural changes required for volunteers to enhance LTC residents' quality of life effectively.
jurisdictions. We highlight how 11 different quality of life domains are supported and which texts offer promising policy language to enhance a well-rounded quality of life for residents. These are timely insights to offer as policy-makers look to the future and consider the lessons learned from the pandemic. We contend that creating more LTC policy is not a timely pathway forward to LTC reform. Instead, we suggest that existing policy can be leveraged when applied within a resident-centred quality of life lens. We will guide attendees through examples of existing promising policies, highlighting how they might be leveraged in planning for a better LTC system. The discussion will be rooted in our unique resident-centred approach to policy analysis using specific domains of quality of life and then applied to four different perspectives: residents, families, staff and volunteers. Our discussant, a Ministry of Health decision-maker, will address the implications of our research for post-pandemic planning to improve resident quality of life.
FAMILIES AS VALUED CONTRIBUTORS TO LTC RESIDENTS' QUALITY OF LIFE: POLICY PERSPECTIVES Janice Keefe, Mount Saint Vincent University, Halifax, Nova Scotia, Canada
Family members are essential contributors to QoL of LTC residents. This paper analyzes how the system views the family's role in residents' QoL and enables or inhibits family involvement. Our analysis of 21 policies that regulate LTC in four Canadian provinces reveals differences in their portrayal of residents' families. In many policies, family roles are characterized procedurally (task-oriented) or relationally (interactive) by policy type. Operational standards (regulatory policies) linked to licensing employ more formal terminology, while LTC program guidelines use facilitative language to engage families and build relationships through voluntary means. Specific examples of orientation and admission procedures, care protocols including use of restraints, the right to live at risk, and end-of-life care are presented to reveal inter-provincial variations. We argue there are opportunities to further engage families within the current regulatory framework.
PROMISING LONG-TERM RESIDENTIAL CARE POLICY GUIDANCE FOR STAFF TO SUPPORT RESIDENT QUALITY OF LIFE Mary Jean Hande, Mount Saint Vincent University, Halifax, Nova Scotia, Canada
This paper reviews 63 policy documents in four Canadian jurisdictions that guide long term residential care staff on how to enhance 11 resident quality of life domains in Canada. We found guidance in each jurisdiction that provides clear language to support staff discretion and flexibility to navigate regulatory tensions and enhance resident quality of life. Newer policies tend to reflect more interpretive approaches to staff flexibility and broader quality of life concepts. We argue that, if interpreted through a resident quality of life lens and with the right structural supports, these promising texts offer important counters to the rigidity of the long term residential care policy landscape and can be leveraged to effectively broaden and enhance quality of life for residents in long term residential care.
TRACING THE EXPRESSION OF RESIDENT QUALITY OF LIFE POLICIES IN CANADIAN LONG-TERM CARE SETTINGS Janice Keefe, and Pamela Irwin, Mount Saint Vincent University, Halifax, Nova Scotia, Canada
Policies favouring safety, security, and order are expressed in preference to those oriented towards personcentred resident quality of life in Canadian long-term care settings. Factors impacting the expression of these latent (under-utilised) rules were uncovered through an analysis of long-term care related policies in four provinces. 84 policies relating to resident quality of life in long-term care were analysed in three sequences, incorporating jurisdictions, policy types, and quality of life domains, over time. The analysis revealed three policy levers: situations-providing explicit and implicit examples of resident oriented quality of life policy suppression in each jurisdiction; structures-identifying which types of policy and quality of life expressions are more vulnerable to dominance by others; and trajectories-confirming the cultural shift towards more person-centredness in Canadian long-term care related policies over time. Although these policies exist, their potentiality remains dormant in the dominant policy discourse, thereby signaling a positive postpandemic possibility.
DOES LONG-TERM CARE POLICY ENABLE OR LIMIT VOLUNTEERS' ROLES IN ENHANCING RESIDENT QUALITY OF LIFE? Emily Hubley, and Mary Jean Hande, Mount Saint Vincent University, Halifax, Nova Scotia, Canada
This paper examines how volunteer roles are represented in Canadian long term care (LTC) policy in four Canadian jurisdictions, attending to how these regulated roles might impact resident quality of life. Overall, we found that policies define volunteer roles narrowly, which may limit residents' quality of life. This happens through (1) omitting volunteers from most regulatory policy, (2) likening volunteers to supplementary staff rather than caregivers with unique roles, and (3) over-emphasizing residents' safety, security and order. We offer insights into promising provincial policy directions for LTC volunteers, yet we caution against further regulating volunteers. Instead, we argue for addressing the cultural, social and structural changes required for volunteers to enhance LTC residents' quality of life effectively.
MAINTAINING ENERGY: A POTENTIAL TRANSFORMATIVE POWER TO ADAPT TO THE CHALLENGES OF OLDER AGE? Chair: Rebecca Ehrenkranz
Reduced energy is a hallmark feature of aging. Maintaining higher energy late in life may be a key adaptive strategy to the challenges that accompany older age and ultimately promote resilience. Perceived lack of energy is often construed as synonymous with fatigue, and energy and fatigue are frequently considered opposite aspects of the same phenomenon. However, evidence suggests that energy and fatigue have distinct underlying neurobiology. Further exploration of the energy/fatigue dichotomy is needed in community-dwelling
Innovation in Aging, 2021, Vol. 5, No. S1
|
v3-fos-license
|
2021-06-06T05:16:12.558Z
|
2021-02-01T00:00:00.000
|
235345265
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ojs.wpro.who.int/ojs/index.php/wpsar/article/download/795/1011",
"pdf_hash": "5a8f027b069bc3a450ca8cd4f4e157bf7c5c6bd7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:778",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"sha1": "5a8f027b069bc3a450ca8cd4f4e157bf7c5c6bd7",
"year": 2021
}
|
pes2o/s2orc
|
Seroepidemiology of SARS-CoV-2, Yamagata, Japan, June 2020
We conducted a seroepidemiological study in a northern Japanese prefecture where the incidence of identified COVID-19 cases was low. In June 2020, residual sera from 1,009 outpatients were tested for antibody to SARS-CoV-2 by electrochemiluminescence immunoassay. Five specimens (0.5%) tested positive, suggesting low prevalence of SARS-CoV-2 infections in this population.
I n Japan, the first case of coronavirus disease 2019 (COVID-19) was identified in mid-January 2020, and cases peaked in the spring at 720 cases per day on 11 April. Thereafter, the number of reported cases per day declined to 50 on 15 May and remained low until mid-June, when numbers again started to increase. On 5 August, 1234 cases were reported, giving a cumulative total of 40 485 cases, with a case fatality proportion of 2.5% (1021 deaths). 1 Although COVID-19 is designated as a reportable disease in Japan, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) testing capacity was limited in the early stage of the pandemic. It took up to 4 days for specimens to be tested by reverse transcription polymerase chain reaction (RT-PCR). The Japanese Government recommended that anyone with mild illness symptoms should stay at home, to avoid overwhelming healthcare facilities. SARS-CoV-2 testing was prioritized for hospitalized patients and those with chronic comorbidities. Thus, the true number of symptomatic cases of COVID-19 in Japan is likely to be far greater than the number of reported cases.
In one Chinese study, SARS-CoV-2-specific immunoglobulin IgG and IgM were detected in serum samples from most patients (asymptomatic or symptomatic) who were diagnosed with SARS-CoV-2 by RT-PCR. 2 This finding implies that seroepidemiological studies can be used to estimate the infection rate of SARS-CoV-2 in a population. Estimating the point prevalence of SARS-CoV-2 infections might be helpful in assessing population susceptibility, and in balancing public health control measures with the reopening of social and economic activities. Results from several seroepidemiological studies have been published, with seroprevalence reported from Spain (5%), Switzerland (10.8%) and the United States of America (1-6.9%, 4.65% and 14%). [3][4][5][6][7] These studies were performed in countries where the incidence of COVID-19 was high. In countries in the Asia-Pacific, where COVID-19 incidence was low, a few SARS-CoV-2 seroepidemiology studies have been conducted that are not population based. Among these studies, seroprevalence was 7.6% from a single-centre study of outpatients and their guardians in the Republic of Korea, and 0.4% in a study using residual sera collected at a single hospital in Malaysia. 8,9 We conducted a cross-sectional seroepidemiological study in Yamagata Prefecture, an urban-rural area in northern Japan, where the incidence of reported COVID-19 cases was 0.007% (i.e. 76 cases among a population of about 1.07 million, as of 5 August 2020). 1 This is lower than the overall incidence of COVID-19 cases reported throughout Japan (0.034%), and lower than in most Japanese prefectures and the Tokyo metropolitan area (0.102%); however, it is higher than in some low-incidence prefectures (0-0.002%). 1 Residual sera obtained from patients who visited the outpatient clinic of Yamagata University Hospital for any acute medical condition during 1-4 June 2020 were tested for SARS-CoV-2 antibody. 
Ethical statement

Blood samples were collected for clinical diagnostic purposes and, after use, were de-identified before serological testing was performed. Because samples were de-identified, individual consent was not obtained. This study was approved by the Ethics Committee of Yamagata University School of Medicine.
IgA antibody -to the nucleocapsid protein of SARS-CoV-2. A cut-off optical density (OD) index value of 1.0 was used to define a seropositive result. According to the manufacturer's fact sheet, the specificity of the serological assay is 99.80% (i.e. 21 false positives among the 10 453 specimens collected before December 2019). 10 Among 1009 samples tested, five specimens were positive for SARS-CoV-2 antibody. The estimated seroprevalence of SARS-CoV-2 infections was 0.50% (95% confidence interval [CI]: 0.062-0.93%). The OD values of five seropositive specimens varied substantially; two had OD values close to the cut-off index value (1.3 and 1.6), suggesting low antibody titres, and three were above 5.0. Using the 95% CI for the seroprevalence estimate of 0.50%, we estimated that the Yamagata Prefecture population had 670-10 000 SARS-CoV-2 antibody-positive individuals.
Our study has several limitations. First, sera used in this study were obtained from patients visiting our hospital's outpatient acute care clinic; hence, this sample is probably not representative of the general population of Yamagata Prefecture. Also, because the serum specimens were de-identified, we did not have any demographic data to determine representation across age groups. Second, the specificity of the assay suggests an anticipated false positive rate of 0.20%, which may affect the reliability of the estimated seroprevalence in our study. Third, in a population with a low prevalence of SARS-CoV-2 infections, as was the case in Yamagata, false positives are more likely than in a population with high prevalence. Slight modification of the assay seropositive cut-off index value (e.g. from 1.0 to 1.6) would reduce the estimated seroprevalence. For example, if only the three strongly positive serum samples were considered to be true seropositive results, the estimated seroprevalence would be 0.30% (95% CI: 0-0.63%).
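The confidence intervals quoted above follow from a standard normal-approximation interval on a binomial proportion, with the lower bound truncated at zero. The short helper below is our own sketch (not the authors' code) and reproduces both the 5/1009 and the 3/1009 estimates.

```python
from math import sqrt

def seroprev_ci(positives, n, z=1.96):
    """Normal-approximation 95% CI for a binomial proportion,
    with the lower bound truncated at zero as in the paper."""
    p = positives / n
    half = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), p + half

# 5 positives among 1009 outpatient sera (cut-off OD index 1.0)
p, lo, hi = seroprev_ci(5, 1009)
print(f"{p:.2%} (95% CI: {lo:.3%}-{hi:.2%})")  # -> 0.50% (95% CI: 0.062%-0.93%)

# Only the 3 strongly positive sera (stricter cut-off)
p, lo, hi = seroprev_ci(3, 1009)
print(f"{p:.2%} (95% CI: {lo:.2%}-{hi:.2%})")  # -> 0.30% (95% CI: 0.00%-0.63%)
```

With such small counts a Wilson or exact (Clopper-Pearson) interval would avoid the negative lower bound entirely; the normal approximation is shown only because it matches the figures reported in the text.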
This cross-sectional seroepidemiological study in Yamagata Prefecture, Japan, identified low seroprevalence of SARS-CoV-2 antibody, suggesting that the population is highly susceptible to SARS-CoV-2. Additional studies with population-based sampling are needed to assess the impact of SARS-CoV-2 in this population over time.
|
v3-fos-license
|
2023-01-22T06:16:10.478Z
|
2023-01-01T00:00:00.000
|
256055481
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/23/2/973/pdf?version=1673685675",
"pdf_hash": "36c4a235b9d440e7af595ec3e2c87bbceac8c178",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:781",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "2e7116edea289c30637ff1a1714e6b86bf5e51dc",
"year": 2023
}
|
pes2o/s2orc
|
Multi-Threaded Sound Propagation Algorithm to Improve Performance on Mobile Devices
We propose a multi-threaded algorithm that can improve the performance of geometric acoustic (GA)-based sound propagation algorithms in mobile devices. In general, sound propagation algorithms require high computational cost because they perform based on ray tracing algorithms. For this reason, it is difficult to operate sound propagation algorithms in mobile environments. To solve this problem, we processed the early reflection and late reverberation steps in parallel and verified the performance in four scenes based on eight sound sources. The experimental results showed that the performance of the proposed method was on average 1.77 times better than that of the single-threaded method, demonstrating that our algorithm can improve the performance of mobile devices.
Introduction
Recently, as interest in blockchain/metaverse/XR/VR/MR has increased [1,2], more research to improve the sense of reality and immersion has been conducted. However, many studies have focused only on visual elements. To improve immersion in virtual environments or multimedia applications, high-quality auditory as well as visual elements are essential [3], and sound rendering provides users with higher-quality auditory elements by giving them a better understanding of intuitive spatial cues.
Sound rendering, which produces high-quality audio, consists of two steps: sound propagation and auralization. The former deals with the propagation of sound waves in virtual space, creating impulse responses (IRs) that are encoded with direction, delay, and frequency-dependent attenuation from a source to a listener. The latter convolves prerecorded or synthetically generated dry audio with IRs to generate the final audio signal and output it to an output device such as speakers or headphones.
In general, the sound propagation stage has the highest computational cost and requires the most resources in the entire sound rendering process [4]. There are two main ways to do this. One is a wave-based numerical method and the other is a geometric acoustic (GA) method. The wave-based numerical method numerically solves the wave equation in the time domain [5] or the frequency domain [6]. With this approach, the computational cost increases exponentially as the scene size and frequencies increase. Although it has the advantage of being able to generate realistic sounds, it is not suitable for real-time applications because it requires considerable time and computing power [7].
In contrast, the GA method uses ray, beam, or frustum tracing to find valid propagation paths, such as direct, transmission, reflection, and diffraction paths, between a listener and a sound source and to estimate multiple reverberation parameters according to space (e.g., size, absorption coefficients). IRs are calculated using the information finally computed through these processes. Therefore, the GA method is suitable for interactive applications because it has a relatively fast processing speed compared to the wave-based numerical method and can track moving source-moving receiver (MS-MR) and geometry scene data at every frame.
Sensors 2023, 23, 973
Most of the current studies employing the GA method use the high computational power on the PC platform to accelerate the sound propagation algorithm, thereby achieving real-time rates (e.g., 30 fps) [8,9]. However, it is very challenging to perform sound propagation algorithms at real-time rates in mobile devices with computing power and memory constraints [10].
Moreover, numerous studies use excessive CPU (four cores or more) or GPU resources only for the sound propagation algorithm. A sound propagation algorithm that uses many CPU cores makes it difficult to process tasks other than sound propagation, which is impractical. Likewise, a sound propagation algorithm using a GPU is unfeasible in real-time applications such as games because it is difficult to use GPU resources for visual rendering.
For the above reasons, a sound propagation algorithm in a mobile device environment must be processed based on a CPU, and when a multi-core method is used to accelerate this, only a minimum amount of resources should be added to deliver sufficient resources to other tasks. This study contributes by presenting a practical multi-threaded algorithm for accelerating sound rendering in a mobile device environment. For this purpose, three methods are included.
First, we used Guide mode (G mode) to find combinations of hit-triangles likely to generate valid paths by shooting multiple rays from the listener. This is the basis for creating multithreaded algorithms. Second, we parallelized Early Reflection mode (ER mode), which handles early reflection, and Late Reverberation mode (LR mode), which handles late reverberation.
Since this method uses only two threads and does not continuously maintain CPU utilization, the memory usage and CPU utilization increase rates were not large. Finally, we showed a thread synchronization scheme suitable for our algorithm. Through this, we solved the race condition problem that occurs during parallel processing.
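The two-thread scheme described above can be sketched structurally as follows. Everything here is a placeholder of ours — the function names, the returned IR values, and the use of Python threads (which, because of the GIL, would not reproduce the reported speedup for CPU-bound tracing). It only illustrates running the ER and LR modes on separate workers and merging their results once both complete, which is also how the race condition is avoided: no shared state is mutated concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def trace_early_reflections(sources):
    # Placeholder for ER mode: image-source reflections + UTD diffraction.
    return {s: f"ER-IR({s})" for s in sources}

def trace_late_reverberation(sources):
    # Placeholder for LR mode: high-order specular reflections.
    return {s: f"LR-IR({s})" for s in sources}

def propagate(sources):
    # ER and LR modes run on two worker threads; the per-source IRs are
    # merged only after both futures resolve (a simple join-style sync).
    with ThreadPoolExecutor(max_workers=2) as pool:
        er = pool.submit(trace_early_reflections, sources)
        lr = pool.submit(trace_late_reverberation, sources)
        return {s: (er.result()[s], lr.result()[s]) for s in sources}

print(propagate(["src0", "src1"]))
```

A native implementation would use OS threads (e.g., `std::thread`) for true parallelism; the join-then-merge structure is the point of the sketch.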
We implemented this on a Galaxy S20+ smartphone using a Qualcomm Snapdragon 865 chipset equipped with the Adreno 650 GPU. We verified the performance by increasing the number of sound sources in various scenes. As a result, the performance of the proposed multi-threaded method was about 1.77 times better on average than that of the single-threaded method. In addition, the increase rates (%) of the proposed method (memory usage, CPU utilization) were 1.07 and 0.87 on average, respectively, compared to the single-threaded method. This shows that our algorithm can be easily applied to the mobile device environment.
Related Work
This section gives an overview of sound propagation algorithms in the last few decades and their components.
Sound Propagation
Wave-based numerical methods calculate IRs by solving wave equations, which are usually second-order partial differential equations. Although such methods are accurate, they are very slow, so they are not suitable for interactive applications. Despite the fact that many studies have been conducted to accelerate the algorithm to solve this problem [11][12][13][14], considerable time and resources are still required, and the conditions remain limited.
The GA method has also been studied extensively. It has covered large scenes with many objects based on beam [15], frustum [16], or ray [17] tracing. Among these, the ray tracing technique has recently been developed in both software and hardware. Therefore, most sound propagation algorithms supporting dynamic scenes are proposed based on ray tracing [18].
Various techniques to accelerate ray tracing-based GA algorithms have been proposed. The source clustering method, which combines sound sources under certain conditions to process many sound sources, improved the performance of the sound propagation algorithm by about 1.9 times based on 200 sound sources [8]. Backward ray tracing, which shoots rays from the listener rather than the sound source, lowered the cost of sound propagation sub-linearly [17]. A visibility graph to handle high-order reflection and diffraction was put forward [19]. An algorithm for quickly finding high-order diffraction paths using the A* pathfinding algorithm was put forth and showed to be about 568 times faster performance than the existing state-of-the-art method [20].
Acceleration methods using strong computing power have also been proposed, including a method of accelerating by assigning a thread to each sound source [6] by using a mixture of a CPU and a GPU [21] and by using a GPU [22]. However, the above methods utilize the powerful computing power of commodity CPUs or GPUs on a PC platform and as a result, maximize the corresponding computing resources. Hence, they utilize too many computing resources for sound rendering. In particular, sound rendering methods using GPUs are impractical because they take away resources for processing visual rendering. For these reasons, they are unsuitable for mobile devices with low computing power and low resources.
Sound Propagation Components
Sound propagation creates various sound effects through three components: direct sound, ER, and LR. Each component has different characteristics, which are the basis for creating various sound effects (see Figure 1).

Direct sound comes directly from the sound source to the listener and is the first of the components to arrive. Since it has the largest amplitude, it provides the maximum contribution to the distance and direction between the sound source and the listener.
ERs are the first echoes created after the arrival of the direct sound and are created through specular reflections and diffractions. LR is a very dense group of echoes and arrives last. It is created through high-order specular reflections or diffuse reflections.
ER and LR provide important perceptual cues about the space around the user, and many studies have been conducted to develop them. Specular reflection has been modeled using ray tracing [23], approximate volume tracing [24], and the image source method [25]. Among these, the image source method is the most accurate, so it is widely used in specular reflection modeling. We adopt the image source method for specular reflection.
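For a single specular bounce, the image source method mirrors the source across the reflecting surface and intersects the image-to-listener segment with that surface. The toy example below is our own illustration with the reflector fixed to the plane z = 0; a real implementation must also test the path against occluders and check that the hit point lies inside the actual reflecting triangle.

```python
def mirror_across_z0(p):
    # Image source: reflect the source point across the plane z = 0.
    x, y, z = p
    return (x, y, -z)

def first_order_reflection(source, listener):
    image = mirror_across_z0(source)
    sx, sy, sz = image
    lx, ly, lz = listener
    # Parameter t where the segment image -> listener crosses z = 0;
    # the crossing point is the specular reflection point (equal angles).
    t = sz / (sz - lz)
    hit = (sx + t * (lx - sx), sy + t * (ly - sy), 0.0)
    return image, hit

image, hit = first_order_reflection((0.0, 0.0, 2.0), (4.0, 0.0, 1.0))
print(image)  # -> (0.0, 0.0, -2.0)
print(hit)    # reflection point on the plane
```

The path length source -> hit -> listener equals the straight-line distance from the image source to the listener, which is why the construction yields exact specular paths.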
There are two major methods for modeling diffraction: the Biot-Tolstoy-Medwin (BTM) [26] and the Uniform Theory of Diffraction (UTD) methods [27]. The BTM is more accurate than the UTD because it handles finite diffracting edges. However, it is not suitable for interactive applications because of the large amount of calculation. On the other hand, the UTD is modeled assuming infinite diffracting edges. It is less accurate than the BTM, but it is fast enough to be applied in interactive applications. For this reason, we adopt the UTD method for diffraction.
Diffuse reflection is modeled using ray tracing [28], path tracing [29], and radiosity [30]. Since this generally requires a large amount of computation, it is not suitable for a mobile device environment.
Processing Flow and Analysis of Sound Rendering
This section introduces the sound rendering pipeline (Section 3.1), the single-threaded sound propagation algorithm that is the basis of the proposed algorithm (Section 3.2), and the performance analysis (Section 3.3).

Sound Rendering Pipeline

Figure 2 shows the proposed sound rendering pipeline. It has two threads: a main thread that finds valid paths according to the locations of the sound source and listener and calculates IRs, and an auralization thread that creates the final sound using the IRs.

The main thread first imports scene data, such as geometry data and audio files, and then creates an acceleration structure (AS), such as a kd-tree or BVH, for static objects through preprocessing. We adopt a kd-tree as the AS for fast search.
The auralization thread reads dry audio (PCM) as needed for every frame of the audio files imported by the main thread. Then, IRs received from sound propagation and the dry audio are convoluted to generate the final output signal, which is output to an output device (speakers or headphones). The above process is repeated through an auralization loop.
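The convolution at the heart of the auralization step is simple in principle. The direct-form sketch below uses made-up sample values (real systems convolve long IRs with block-based FFT convolution for performance) and shows how one IR shapes a dry signal:

```python
def convolve(dry, ir):
    # Direct-form convolution: each dry sample excites a scaled,
    # delayed copy of the impulse response.
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += d * h
    return out

dry = [1.0, 0.5, 0.25]   # dry PCM samples (invented)
ir = [1.0, 0.0, 0.6]     # direct arrival plus one echo at 60% gain (invented)
print(convolve(dry, ir))
```

The output is the dry signal overlaid with a delayed, attenuated copy of itself — exactly the echo encoded by the IR's second nonzero tap.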
Single-Threaded Sound Propagation Algorithm
The proposed algorithm is implemented based on a single-threaded sound propagation algorithm. It is GA-based and uses ray tracing algorithms to create sound effects such as ER or LR. The ER is created through specular reflections (up to four-order) based on the image source method and edge diffractions (up to two-order) based on the UTD, and the LR is created through specular reflections (four-order). Figure 3 shows a flowchart of the single-threaded sound propagation algorithm, images of the sound propagation modes included in the algorithm, and ray tracing processing in each mode.

The sound propagation algorithm is processed in the order of build acceleration structure, PathCache mode (PC mode), direct/transmission mode (DT mode), ER mode, and LR mode. Each step is as follows.

First, build acceleration structure updates the kd-tree for dynamic objects. This enables the sound propagation algorithm to process dynamic scenes. Next, the sound propagation modes are performed. These are the steps that create sound effects through ray tracing processing and include PC mode, DT mode, ER mode, and LR mode.
The PC mode is a step of finding valid reflection or diffraction paths in the current frame through propagation path caching. In other words, this process searches for valid paths in a path-cache-buffer where valid paths found in the previous frame are stored based on the location of the changed sound source and listener in the current frame.
Ray tracing algorithms create frame coherency issues due to the random directionality of the rays. To avoid such issues, propagation path caching is used in many interactive sound propagation algorithms [6,15].
DT, ER, and LR modes are the steps for generating direct sound, ER, and LR, respectively. They find valid paths through the ray tracing processing, iterating over the number of sound sources, the number of guide rays shot from the listener, and the number of source rays shot from the sound sources, respectively.
All rays are processed through the ray tracing processing, in the order of ray generation, traversal and intersection (TnI), propagation path validation (PPV), and IR calculation. This is repeated for the maximum depth of the ray defined in the sound propagation. Each processing step is as follows.
Ray generation generates guide rays in PC and ER modes and source rays in DT and LR modes through random spherical sampling. TnI performs traversal to find hit-triangles using the guide and source rays and then runs ray-triangle intersection tests. If the intersection tests succeed, PPV is conducted.
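The random spherical sampling used in ray generation can be sketched as below. This uses the standard (cos-theta, phi) method for drawing a uniformly distributed unit direction; the struct and function names are ours, not the paper's.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>

// A unit direction vector (illustrative type).
struct Dir3 { double x, y, z; };

// Uniform sampling on the unit sphere: z = cos(theta) is uniform in
// [-1, 1] and the azimuth phi is uniform in [0, 2*pi), which together
// give a direction with no angular bias.
inline Dir3 sampleUnitSphere(std::mt19937& rng) {
    const double kPi = 3.14159265358979323846;
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double z = 2.0 * u01(rng) - 1.0;      // cos(theta), uniform in [-1, 1]
    double phi = 2.0 * kPi * u01(rng);    // azimuth, uniform in [0, 2*pi)
    double r = std::sqrt(std::max(0.0, 1.0 - z * z));
    return {r * std::cos(phi), r * std::sin(phi), z};
}
```

Each guide or source ray then uses the listener or source position as its origin and one such sample as its direction.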
PPV finds valid paths through the validation test, as shown in Figure 4, based on the hit-triangles found by TnI. Then, the IR calculation describes the propagation effect by calculating the IRs between the sound source and the listener. It supports four frequency bands (0–250 Hz, 250–1000 Hz, 1000–2000 Hz, 2000–4000 Hz) for each listener-source pair.
IRs of the sound propagation modes have attenuation parameters of direction, delay, and frequency. The delay is calculated by dividing the length of the path by the sound velocity, and the attenuation parameters are calculated by accumulating attenuation based on distance and frequency-dependent wall absorption coefficients.
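The delay and attenuation computation described above can be sketched as follows. The four-band layout matches the text; the simple 1/r distance falloff and the "energy kept per bounce" accumulation are our illustrative assumptions, not the paper's exact model.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

constexpr int kBands = 4;                // 0-250, 250-1000, 1000-2000, 2000-4000 Hz
constexpr double kSpeedOfSound = 343.0;  // m/s at room temperature

struct BandGain { double g[kBands]; };

// Delay of a path: its total length divided by the sound velocity.
inline double delaySeconds(double pathLengthMeters) {
    return pathLengthMeters / kSpeedOfSound;
}

// Per-band gain: start from a simple 1/r distance attenuation and
// multiply in (1 - absorption) for every wall the path hits.
// absorption[i][b] is the coefficient of the i-th hit surface in band b.
inline BandGain accumulateGain(
    double pathLengthMeters,
    const std::vector<std::array<double, kBands>>& absorption) {
    BandGain out{};
    double dist = 1.0 / std::max(1.0, pathLengthMeters);
    for (int b = 0; b < kBands; ++b) {
        double gain = dist;
        for (const auto& a : absorption) gain *= (1.0 - a[b]);
        out.g[b] = gain;
    }
    return out;
}
```

A path of 343 m thus arrives with a 1 s delay, and each reflective bounce scales the band gains by its frequency-dependent absorption.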
LRs' IRs require additional parameters. We employ the widely used Eyring model [31] as the LR model. The parameters of this model are the volume of the room, the total absorbing surface area of the room, and the average absorption coefficient of the surfaces. They are computed using hit-triangles found by the guide and source rays. The IRs with the above information encoded are passed to the auralization thread to create the final sound.
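The Eyring model mentioned above relates the three listed parameters to a reverberation time. As a reference sketch (standard formulation of the model, with the 0.161 metric constant):

```cpp
#include <cassert>
#include <cmath>

// Eyring reverberation time RT60 = 0.161 * V / (-S * ln(1 - a)), where
// V is the room volume (m^3), S the total absorbing surface area (m^2),
// and a the average absorption coefficient of the surfaces. These are
// exactly the three parameters the text says are estimated from the
// hit-triangles found by the guide and source rays.
inline double eyringRT60(double volume, double surfaceArea, double avgAbsorption) {
    return 0.161 * volume / (-surfaceArea * std::log(1.0 - avgAbsorption));
}
```

For small absorption coefficients, -ln(1 - a) is approximately a and the formula reduces to the simpler Sabine model.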
Performance Analysis of Sound Propagation Modes
To effectively accelerate the sound propagation algorithm, it is essential to find which part of the existing single-threaded sound propagation algorithm is the bottleneck. To do so, we analyzed the performance of the sound propagation modes, which are the core of the sound propagation algorithm.
We performed the sound propagation algorithm in the four scenes shown in Figure 5 on a Galaxy S20+ smartphone using a Qualcomm Snapdragon 865 chipset equipped with the Adreno 650 GPU. In addition, we used eight static sound sources to increase the performance load, and shot 1024 guide and source rays, respectively.
Table 1 shows the performance of each sound propagation mode for eight sound sources. All the scenes spend most of their time in ER and LR modes and relatively little time in PC and DT modes. Since more than 96% of the total time is spent in ER and LR modes, they are clearly the bottleneck. For this reason, it is essential to accelerate these modes to improve the performance of the sound propagation algorithm, and we propose a multi-threaded sound propagation algorithm to overcome this problem.
Proposed Multi-Threaded Sound Propagation Algorithm
This section introduces the proposed multi-threaded-based techniques and structures to improve the performance of sound propagation algorithms. For this purpose, additional and modified sound propagation modes (Section 4.1) and synchronization methods (Section 4.2) are described.
Multi-Threaded Sound Propagation Algorithm
To apply GA-based sound rendering to interactive applications, it is very important to improve the performance of the sound propagation algorithm. However, since sound propagation algorithms are generally implemented based on ray tracing, it is very challenging to do so.
In particular, the cost of ER and LR increases rapidly with the number of valid paths and sound sources, which makes it much more difficult for them to perform at real-time rates. We propose a multi-threaded sound propagation algorithm to improve its performance.
Our basic idea is to accelerate the algorithm by performing ER and LR on separate threads. To do this, the single-threaded sound propagation algorithm is modified, and a new sound propagation mode is added to enable multi-threaded execution. Figure 6 shows the flowchart of the proposed multi-threaded sound propagation algorithm. It executes in the order of build acceleration structure, DT mode, and G mode, and then ER and LR modes are processed in parallel on two threads. Finally, the IRs from the two parallelized modes are combined in the merge-IRs step, delivered to an auralization thread, and the algorithm terminates.
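The parallel stage of Figure 6 can be sketched as below. This is a minimal illustration with stand-in types: the IR payload is reduced to a double, and the two modes are passed in as callbacks. Because ER and LR write to disjoint buffers, the parallel section itself needs no lock.

```cpp
#include <cassert>
#include <thread>
#include <vector>

// Per-frame state (illustrative): one IR buffer per parallel mode.
struct Frame {
    std::vector<double> erIRs;  // written by the ER thread only
    std::vector<double> lrIRs;  // written by the LR thread only
};

using ModeFn = void (*)(std::vector<double>&);

inline std::vector<double> runParallelStage(Frame& f, ModeFn erMode, ModeFn lrMode) {
    // ER and LR run concurrently on two threads into separate buffers.
    std::thread er([&f, erMode] { erMode(f.erIRs); });
    std::thread lr([&f, lrMode] { lrMode(f.lrIRs); });
    er.join();
    lr.join();
    // merge-IRs step: combine both buffers for the auralization thread.
    std::vector<double> merged = f.erIRs;
    merged.insert(merged.end(), f.lrIRs.begin(), f.lrIRs.end());
    return merged;
}
```

The join-then-merge pattern is what makes the separate-buffer design safe: the merge only runs after both writers have finished.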
The proposed algorithm introduces three new techniques for the parallelization of ER and LR. First, G mode, the key mode for parallelizing the sound propagation algorithm, is newly added. The goal of G mode is to find combinations of hit-triangles that are likely to form valid paths around the listener.
G mode has two stages: Step 01, which finds combinations of hit-triangles, and Step 02, which sorts the found combinations and removes duplicate elements (See Figure 7). The detailed process is as follows. G mode shoots as many rays as the maximum number of guide rays set by the user to find combinations. The origin of the ray is set to the position of the listener, and the direction of the ray is calculated through spherical random sampling. Next, through ray tracing processing, G mode finds combinations of hit-triangles based on the ray. Then, the found combinations are stored in the combinations buffer.
Based on the found combinations, a sort is performed for each depth using a merge-sort keyed on the indices of the hit-triangles in the combinations. Then, duplicate combinations are removed by looping over the combinations. The resulting combinations are delivered to ER and LR modes. The pseudocode for G mode is summarized in Algorithm 1.
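Step 02 of G mode (sort, then remove exact duplicates) can be sketched as below. The paper specifies a merge-sort; `std::stable_sort` is used here for brevity, and the combination type is a simple vector of triangle indices.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// One combination: the hit-triangle indices along one guide ray
// (illustrative representation).
using Combination = std::vector<int>;

// Sort combinations lexicographically by triangle index, then erase
// adjacent duplicates, so ER/LR never validate the same candidate
// path twice.
inline void sortAndDeduplicate(std::vector<Combination>& combos) {
    std::stable_sort(combos.begin(), combos.end());
    combos.erase(std::unique(combos.begin(), combos.end()), combos.end());
}
```

Sorting first is what makes duplicate removal a single linear pass, mirroring the loop over neighboring entries in Algorithm 1.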
(Algorithm 1: pseudocode for G mode. Step 01 finds combinations of hit-triangles: for each ray R ∈ {R0, ..., Rn−1}, the origin is set to the position of L with a random direction, the ray tracing processing returns a combination CHT, and valid combinations are added to the combinations buffer CB. Step 02 then sorts the combinations and removes duplicates.)
Second, the ray tracing processing of ER and LR modes is changed, and PC mode is removed. In particular, ER mode typically finds valid paths while performing work in proportion to the maximum number of guide rays. However, the proposed method precalculates, in G mode, the combinations of hit-triangles that are likely to be valid paths. For this reason, the ray tracing processing of the modes used in the single-threaded algorithm is not suitable for our multi-threaded method and needs to be modified. The work processed in PC mode is handled by the newly added merge-hit-triangles step in G mode and the setup-hit-triangles step in ER mode. Figure 8 shows the flowchart of ER mode. It calculates the additional information needed for the IR calculation based on the combinations of hit-triangles generated by G mode and then generates IRs for ER. The processing steps are changed compared to the single-threaded method: a setup-hit-triangles step is added, and the ray generation and TnI steps are removed because the combinations of triangles are presearched in G mode.
The detailed process of setup-hit-triangles is as follows. It receives Cn (the combinations of triangles) imported from G mode, where 0 ≤ n ≤ N − 1 and N is the number of combinations, together with L (the listener) and S (the sound source) as input.
First, a merge-sort is performed on the combinations in Cn and the combinations in the path-cache-buffer of S. This is the same as Step 02 of G mode, through which duplicate combinations are removed.
Then, additional information is calculated for T (the triangles) in each combination while looping through Cn. To do this, setup-hit-triangles determines the type of each T. The type variable indicates what kind of path will be created: reflection, diffraction, or none. If S is positioned toward the normal side of T, the type is reflection; otherwise, it is diffraction. If T is invalid, the type is set to none.
Through this, setup-hit-triangles determines what information needs to be additionally calculated for T. If T's type is reflection, setup-hit-triangles calculates the listener mirror positions for the image source method. Conversely, if it is diffraction, it computes edge information (edge point, edge vector) for UTD. The pseudocode for setup-hit-triangles is summarized in Algorithm 2.
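The reflection branch above can be sketched with standard vector algebra: the normal-side test decides the type, and the image-source mirror position is a reflection across the triangle's plane (given here by a point p0 on it and its unit normal n). Types and names are illustrative.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

inline double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Type test: the point lies on the normal side of the triangle's plane
// => reflection; otherwise the path is treated as diffraction.
inline bool isReflectionSide(const Vec3& p, const Vec3& p0, const Vec3& n) {
    return dot({p.x - p0.x, p.y - p0.y, p.z - p0.z}, n) > 0.0;
}

// Image source method: mirror a position across the triangle's plane.
// d is the signed distance to the plane; subtracting 2*d*n reflects it.
inline Vec3 mirrorAcrossPlane(const Vec3& p, const Vec3& p0, const Vec3& n) {
    double d = dot({p.x - p0.x, p.y - p0.y, p.z - p0.z}, n);
    return {p.x - 2.0 * d * n.x, p.y - 2.0 * d * n.y, p.z - 2.0 * d * n.z};
}
```

Chaining `mirrorAcrossPlane` across each triangle of a combination yields the higher-order image positions used to validate multi-bounce reflection paths.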
Through this, setup-hit-triangles determines what information needs to be additionally calculated for T. If T's type is reflection, setup-hit-triangles calculates listener mirror positions for the image source method. Conversely, if it is diffraction, it computes edges information (edge point, edge vector) for UTD. The pseudocode for setup-hit-triangles is summarized in Algorithm 2. for C ∈ {C 0 , · · · C n−1 } do 10: for T ∈ {T 0 , · · · T 3 } do 11: Then, PPV in ray tracing processing finds a valid path among combinations as in Figure 4. After that, it is processed in the same way as the single-threaded algorithm. Through this, the IR for ER is created and passed to the merge-IRs step in Figure 6.
LR mode does not differ significantly from the existing single-threaded method, but the method of generating combinations for calculating IR is slightly different (see Figure 9). Since the single-threaded method proceeds sequentially, IRs are calculated immediately whenever source rays are shot one by one in LR mode after the combination for the listener is calculated in ER mode.
However, in the multi-threaded method, since ER and LR modes are divided between two threads, IRs cannot be calculated immediately in LR mode. Thus, when a valid path (a combination of hit-triangles) is found by PPV, LR mode temporarily stores the valid path without calculating the IR immediately.
In addition, when the ray tracing processing is finished, combinations are merged through the merge-hit-triangles step, as in Step 02 of G mode, based on the combinations found by LR mode and the combinations imported from G mode. At this time, the triangles included in both sets of combinations are designated as triangles that contribute to the Eyring model, and IRs are calculated based on them.
The final change is to separate the IRs memory for ER and LR mode. Multi-threaded algorithms cause data races due to shared resources. To prevent this, a synchronization lock such as a mutex is required, but the cost of such a lock degrades the performance of the algorithm.
To reduce this cost, we remove locks for the synchronization in the IR buffer that stores IRs in each thread, and separate buffers for ER and LR to store IRs. If two threads create IRs and store them in respective IR buffers, the IRs in the two buffers are merged through the merge-IR step, as shown in Figure 6.
Thread Synchronization
As the proposed algorithm performs parallel processing through two threads, thread synchronization is essential. We use three functions (Wait, SetEvent, ResetEvent) for thread synchronization. Wait (Object) is a function that waits until a specific event object becomes true. Set/ResetEvent (Object) are functions that change the signal of an event object to true/false. For example, if there is an event object called T0 and Wait (T0) is called, the thread waits until SetEvent (T0) is called. Conversely, ResetEvent (T0) blocks the corresponding thread.
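The Wait/SetEvent/ResetEvent primitives described above (Windows-style event objects) can be sketched portably with a condition variable, as below. This is a minimal manual-reset event, not the paper's implementation.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Manual-reset event: Wait blocks until the event is signaled; SetEvent
// signals it (releasing all waiters); ResetEvent clears the signal so
// subsequent Wait calls block again.
class Event {
public:
    void Wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return signaled_; });  // guards spurious wakeups
    }
    void SetEvent() {
        { std::lock_guard<std::mutex> lk(m_); signaled_ = true; }
        cv_.notify_all();
    }
    void ResetEvent() {
        std::lock_guard<std::mutex> lk(m_);
        signaled_ = false;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool signaled_ = false;
};
```

With two such objects (e.g., LR0 and LR1), thread02 can be gated exactly as in Figure 10: it waits on LR0 before starting and on LR1 before merge-hit-triangles and the IR calculation.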
We perform thread synchronization as shown in Figure 10. Thread01 performs DT, G, and ER modes, and thread02 performs LR mode. We divide LR mode into ray tracing processing for LR and (merge-hit-triangles + IR calculation) to increase the parallelism of the algorithm.
Thread02 starts after SetEvent (LR0) is called on thread01. Then, thread02 waits until G mode finishes. When SetEvent (LR1) is called in thread01, merge-hit-triangles and the IR calculation are performed in thread02. In thread01, merge-IRs is executed when the IR calculation of LR is finished.
Experimental Results
This section introduces the experimental environment and settings (Section 5.1) and describes the experiments performed to determine the appropriate number of rays for load-balancing of the proposed algorithm (Section 5.2). In addition, it evaluates the performance of the proposed multi-threaded algorithm through a performance comparison with the single-threaded algorithm (Section 5.3) and assesses algorithm overhead by determining the memory usage and CPU utilization of single-threaded and multi-threaded algorithms (Section 5.4).
Experimental Setup
We implemented the sound propagation algorithm in the form of a native plug-in (.so, .dll) and connected it to the Unity game engine to conduct experiments (see Figure 11). The performance of the sound propagation algorithm varies greatly depending on the ray depths and the number of triangles and valid paths, which are inherently determined by the characteristics of the scenes.
Figure 11. Sound propagation algorithm running on Galaxy S20+.
For this reason, as shown in Figure 5, we adopted two indoor scenes and two hybrid scenes mixing indoor and outdoor areas. The sibenik, concerthall, and angrybot scenes are static, and racelake is a dynamic scene. We conducted the experiments with the sound source and listener stationary to apply a steady performance load to the sound propagation algorithm, and the experiment device was a Galaxy S20+.
Load-Balancing
When the ray-tracing-based sound propagation algorithm shoots more rays, it finds more valid paths, making it more likely to generate audio that matches the visual rendering. However, shooting a very large number of rays (10k, 100k) degrades sound rendering performance.
In addition, in a multi-threaded algorithm, appropriate load-balancing between threads performing tasks is essential. We needed to appropriately adjust the number of rays that most affect the performance of the two threads to find the optimal load-balancing in our algorithm. Thus, we conducted an experiment to find an appropriate ratio between the number of guide rays used in G and ER modes and the number of source rays used in LR mode.
We set the number of sound sources, the number of guide rays, and the maximum depth to 8, 1024, and 4, respectively, in the Sibenik scene, which is the worst case among the scenes. We then measured the rate of increase in the performance of the multi-threaded algorithm over the single-threaded algorithm while increasing the number of source rays (64 to 4096) for each sound source (see Figure 12). The load-balancing of the two threads improved with the performance increase.
Figure 12. Increase rate of multi-threaded performance compared to single-threaded performance.
The experimental results showed that the performance increase rate gradually rose when the number of source rays went from 64 to 1024, and the performance increase rate was the highest when the number of source rays was 1024.
This means that when the number of source rays is less than 1024, LR mode must wait for a certain time until ER mode is finished because the throughput of LR mode is greater than that of ER mode. This waiting time causes performance degradation.
Conversely, when the number of source rays is 1024 to 4096, the throughput of LR mode is higher than that of ER mode. Because of this, ER mode must wait until LR mode finishes, so the rate of increase in performance gradually decreases. That is, the proposed algorithm shows the best performance and the best load-balancing when the ratio of the number of guide rays to the total number of source rays is about 1:8 in the worst case. Table 2 shows the performance comparison of the single-threaded and multi-threaded algorithms for the four scenes. We set the maximum depth, the number of guide rays, and the number of source rays to 4, 1024, and 1024, respectively. We measured the number of valid reflection and diffraction paths and the average frame time over 100 frames while increasing the number of sound sources (1, 2, 4, and 8) for each scene. The experimental results showed that the performance increase rate for each scene with 8 sound sources was 84.96% in sibenik, 104.67% in concerthall, 54.95% in angrybot, and 64.46% in racelake. These results show that the performance of the proposed multi-threaded method was on average 77.26% better than that of the single-threaded method.
Memory Usage and CPU Utilization
We measured CPU utilization and memory usage to assess the overhead introduced by our multi-threaded algorithm and compared it to that of the single-threaded algorithm. We used Snapdragon Profiler for these measurements.
In the case of the CPU utilization experiment, we fixed the FPS of the two comparison groups for a fair experiment and set the maximum depth, number of sound sources, number of source rays, and number of guide rays to 4, 8, 1024, and 1024, respectively. We then measured the average CPU utilization for 30 s. Table 3 shows the average CPU utilization and the difference between the single-threaded and multi-threaded algorithms. The experimental results showed that the CPU utilization of the single-threaded algorithm was lower than that of the multi-threaded algorithm in all scenes. However, the difference (%) between the two utilizations was 1.00, 1.40, 0.38, and 0.71 in sibenik, concerthall, angrybot, and racelake, respectively. In other words, our algorithm does not use many CPU resources even though it uses a multi-threaded method. This is because it uses only two threads and does not constantly use CPU resources.
In the case of the memory usage experiment, we used the same conditions as in the CPU utilization experiment and measured only the memory usage of the sound propagation algorithms for 30 s while increasing the number of sound sources (1, 2, 4, and 8) in sibenik. Table 4 shows the memory usage and the difference between the two algorithms. The experimental results showed that the difference in memory usage (MB) was 1.80, 1.49, 0.63, and 0.39 for 1, 2, 4, and 8 sound sources, respectively. The memory overhead is therefore low, as the multi-threaded algorithm does not increase memory usage significantly. As can be seen from the above experimental results, the proposed multi-threaded algorithm not only has higher performance than the single-threaded method, but is also more suitable for the mobile device environment, as it minimizes the increase in memory usage and CPU utilization.
Conclusions
This paper proposed a multi-threaded sound propagation algorithm to improve the performance of sound propagation algorithms in mobile devices. To achieve this, we mainly used three methods. First, we performed what is called G mode for parallel task processing. This enabled ER and LR modes to perform in parallel by finding combinations of hit-triangles likely to create valid paths by shooting multiple rays from the listener.
Second, we split the processing into two threads: ER mode, which produces early reflection, and LR mode, which produces late reverberation. Finally, we solved the problem of the race condition by applying a suitable thread synchronization technique.
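The two-thread split with a synchronization point can be illustrated with a minimal sketch. Python's `threading` module stands in for the engine's native threads here, and the mode names and placeholder workloads are illustrative assumptions, not the paper's implementation:

```python
import threading

results = {}

def er_mode():
    # placeholder workload standing in for early-reflection path tracing
    results["early"] = sum(i * i for i in range(1000))

def lr_mode():
    # placeholder workload standing in for late-reverberation estimation
    results["late"] = sum(i for i in range(1000))

# run both modes in parallel; join() is the synchronization point
# that prevents a race on the merged result
t1 = threading.Thread(target=er_mode)
t2 = threading.Thread(target=lr_mode)
t1.start(); t2.start()
t1.join(); t2.join()

merged = results["early"] + results["late"]
```

The key point mirrored from the text is that neither thread reads the shared result until both have finished, which is the simplest way to avoid the race condition mentioned above.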
Based on this, we showed that the two modes can be simultaneously processed in parallel to improve the performance of the sound propagation algorithm. In addition, since this method uses only two threads and does not increase the memory usage or CPU utilization rate compared to the single-threaded method, we found that it is suitable for application in the mobile device environment.
We verified the performance, memory usage, and CPU utilization of the proposed algorithm in various scenes. The experimental results showed that the performance of the multi-threaded method was about 1.77 times better than that of the single-threaded method. Moreover, the average increase rates (%) in terms of memory usage and CPU utilization of the multi-threaded algorithm were only 1.07 and 0.87, which represents negligible additional overhead. That is, our algorithm is suitable for application in a mobile device environment while providing a clear increase in performance.
Clustering of Similar Historical Alarm Subsequences in Industrial Control Systems Using Alarm Series and Characteristic Coactivations
Alarm flood similarity analysis (AFSA) methods are frequently used as a primary step for root-cause analysis, alarm flood pattern mining, and online operator support. AFSA methods have been promoted in several research activities in recent years. However, addressing an often-observed ambiguity of the order of alarms and the annunciation of irrelevant alarms in otherwise similar alarm subsequences remains a challenging task. To address and solve these limitations, this paper presents a novel AFSA method that uses alarm series as input to two extended term frequency-inverse document frequency (TF-IDF)-based clustering approaches, a dimensionality reduction technique, and a novel outlier validation. The method proposed here utilizes both characteristic alarm variables and their coactivations, thus, emphasizing the dynamic properties of alarms to a greater extent. Our method is compared to three relevant methods from the literature. The effectiveness and performance of the examined methods are illustrated by means of an openly accessible dataset based on the “Tennessee-Eastman-Process”. It is shown that the integration of alarm series data improves the overall performance and robustness of the AFSA. Furthermore, the clustering results are less influenced by the ambiguity of the order of alarms and irrelevant alarms, thus overcoming a persistent challenge in alarm management research.
I. INTRODUCTION
Driven by the advances in automation technologies, industrial process plants have become data intensive. The amount of data being processed and stored, e.g., time series readings from sensors and alarm logs, can sum up to hundreds of gigabytes every year [17]. This data provides a potential for data mining (DM) and machine learning (ML) to better understand plant behavior and thereby support better operator decisions.
In process control systems, alarms are raised to warn operators about critical process deviations when a predefined critical threshold value at a field sensor is exceeded. Ideally, the number of alarms raised at a time should be as low as possible. However, in more anomalous situations there can be a high number of alarms that becomes difficult to handle, which is a known challenge in the industry and literature, typically referred to as alarm floods [3].

The associate editor coordinating the review of this manuscript and approving it for publication was Yiqi Liu.
In situations of alarm floods, a simple sequential handling of alarms may not be the most practical approach, due to the limited time available to resolve the critical plant situation, but also because the various alarms cannot be handled in isolation but have dependencies. In many cases, alarms of an alarm flood were triggered by a common root-cause [13]. Here, an interesting use case for DM and ML is to extract the implicit knowledge about historic alarm situations. Such automatic learning offers the opportunity to save the time a human would otherwise spend acquiring non-obvious rules and patterns from years of experience. DM- and ML-based operator support functions could be imagined as part of the process control system that could explain a recurrent alarm flood to the operator and thereby save the operator time in the decision-making process, where a manual assessment of complex alarm floods can be time consuming.

VOLUME 9, 2021. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
In this article, a novel approach is presented for the analysis and clustering of similar and recurrent alarm floods that makes use of insights about the dynamic properties of alarms and their coactivations. The proposed approach was compared to existing methods using an openly accessible alarm dataset. It is observed that the proposed approach leads to more accurate and meaningful clusters than approaches that leave the intrinsic knowledge about the dynamic structure of the alarm sequences unconsidered.
This paper is organized as follows: Section II analyzes the related work. Section III describes the development of a novel approach. In Section IV an in-depth evaluation and comparison of the methods in Sections II and III is conducted. Finally, this paper concludes with a discussion of the evaluation results and an outlook on potential future work in Section V.
In reference [20] we proposed a preliminary method for the analysis and clustering of similar and recurrent alarm floods. This method was presented and discussed at the ''32nd International Workshop on Principles of Diagnosis'' in Hamburg, Germany, in September 2021. The approach proposed in this paper considerably advances our previously presented research. By applying a suitable and carefully selected dimensionality reduction technique, we were able to successfully solve a major limitation of our preliminary method, i.e., a high dimensionality and computational complexity when clustering similar alarm floods. Moreover, our additional processing step is shown to have a distinct positive effect on the accuracy of the found clustering solution. These improvements make our novel method more feasible for industrial practitioners and ML researchers from academia.
II. RELATED WORK
A comprehensive overview of the existing alarm data analysis approaches is given in [18]. One major branch is alarm flood similarity analysis (AFSA) methods, which detect and group recurrent historical alarm flood situations or, more generally, alarm subsequences (ASs) [18]. Here, ASs are smaller partitions of an original alarm sequence [2], [32]. The unsupervised task of grouping or clustering similar historical ASs aims at finding ASs that are associated with similar abnormal situations. In this context, the alarm data of historical ASs is processed using a suitable similarity measure. AS clusters are then formed by finding those groups of ASs that are more similar to each other than compared to ASs in other clusters [18]. AFSA methods thus allow for the collection of different variants of otherwise similar abnormal situations, which can improve further analysis steps [7].
Most commonly, AFSA methods are used for alarm rationalization or to generate the input for advanced alarm analysis methods [18]. For example, in [7], [10], and [27], clusters of similar ASs are subject to a causal analysis to detect common root-cause disturbances. This information can then be used online to support the operator with suggestions regarding the most likely root-cause disturbance of a recurring AS [10]. Reference [5] defined two requirements (R1 and R2) regarding the similarity analysis of ASs: 1) A suitable method should tolerate irrelevant alarms annunciated in some ASs. 2) A suitable method should tolerate a swapped order of alarm activations (ACTs) in otherwise similar ASs.
One category of AFSA approaches applies ''frequent pattern mining'' (FPM) methods to sequences of ACTs. For example, [8] and [32] use FPM to detect the most relevant combinations of alarms in historical alarm data. However, these methods are restricted to alarm clusters that have minimum support in the data, i.e., either the absolute or relative frequency, and thus, they show limitations when an abnormal situation is uncommon.
Another category that is promoted in several research activities is the pairwise alignment of ASs. For this purpose, [2] proposed a global sequence alignment method using the dynamic time warping (DTW) algorithm to detect common alarm patterns. Prior to that, a prefiltering step groups potentially similar ASs according to the Jaccard-distance of AS pairs (see (7)). However, DTW does not tolerate any ambiguity of order in otherwise similar ASs. This challenging task was to some extent solved by [5], in which a local sequence alignment was used that allows for a certain ambiguity of order if the alarms are close in time. It introduced the modified Smith-Waterman (MSW) algorithm, which is considered a prevailing benchmark in the AFSA literature [18]. The MSW algorithm generates a similarity matrix, which is used as the input for an agglomerative hierarchical clustering approach with single-linkage (AHC-SL) to cluster similar ASs [5]. One limitation arises from the penalization of alarms in one AS that could not possibly be aligned with a matching counterpart in another AS. A disagreement on the number of ACTs in two ASs therefore negatively affects their similarity, thus making the MSW approach less robust to irrelevant alarms. Reference [27] proposed an improved version of the MSW algorithm by applying a filtering step based on the Jaccard-distance, as described in [2]. Henceforth, this method is referred to as MSW-J. Further alignment approaches were presented that aimed at reducing the computational effort required to carry out the MSW approach [11] and that applied alarm priority information as a primary similarity indicator [14].
A third category of AFSA methods is string metrics, which are based on distance or similarity measures [18]. For example, in addition to its utilization in the pre- or postprocessing of AS pairs, the Jaccard-distance was also used in [7] and [9] as a primary measure for the clustering of similar ASs. It considers only the binary activity of alarm variables (AVs), which are the unique identifiers of configured alarms, and not the number or order of ACTs and is therefore robust to any ambiguity in both. However, the Jaccard-distance overrates the similarity between two ASs that share common alarms but have considerable disagreement in their respective dynamics.
Henceforth, this method is referred to as J. Another string metric is the Levenshtein-distance, which uses the number of edits, i.e., insertion, deletion, and substitution of ACTs, that are needed for the transformation of one AS into another AS [9]. It shares some properties with the DTW in [2] and therefore has limitations if ACTs are annunciated in a swapped order. Another promising AFSA string metric, proposed in [9], uses the term frequency-inverse document frequency (TF-IDF) for the pairwise comparison of ASs. The TF-IDF is a frequently utilized measure in natural language processing that applies a bag-of-words model, i.e., a simplified representation of the alarms in an AS that does not consider their order but rather their quantity. Moreover, a unique feature of the TF-IDF is its weighting of the relevance of AVs according to their probability of occurrence with regard to all ASs. Eventually, similar ASs are clustered using the ''density-based spatial clustering of applications with noise'' (DBSCAN) [28]. Reference [9] demonstrated that this method generates robust and meaningful results compared to other methods, especially when Jaccard-distance-based postprocessing is applied. Henceforth, this method is referred to as T-A-J. It was also used in [10] as a primary step for the causal analysis of ASs. However, it is less robust to irrelevant ACTs of AVs with a high weight.
In conclusion, the data-driven AFSA approaches described here show some deficits in fulfilling both requirements R1 and R2. Moreover, most of these approaches use fixed alarm rates and time windows to detect ASs in historical data, e.g., in [2], [5], [9], [10], and [27], which could result in important alarms or ASs being missed [19]. This deficiency justifies the proposal of a novel method that is robust against both order ambiguity and some irrelevant ACTs while still considering relevant aspects of the AS's dynamic structure.
It was further shown in [18] that all of the existing AFSA methods share the common property of using an alarm sequence representation as input, i.e., a sequence of alarm instances ordered by their ACT times. However, [18] also examined two research areas that are similar to the idea of AFSA, namely, alarm similarity analysis and online alarm flood classification. The former examines the correlation between AVs, and the latter identifies known AS patterns in incoming alarm floods [18]. In both areas, several approaches have demonstrated that using alarm series, i.e., alarm data represented as time series, can be beneficial and produce more meaningful results, e.g., in [18] and [33], than when using only alarm activations. Moreover, [18] illustrated the advantages of using alarm coactivations for alarm analysis, i.e., two or more AVs that are active at the same time.
III. PROPOSED APPROACH A. OVERVIEW OF THE PROPOSED APPROACH
Based on the findings in Section II, this paper proposes an improvement to the promising T-A-J approach in [9] that aims at meeting the requirements R1 and R2. The improvement is achieved by using two novel TF-IDF-based AS clustering methods that utilize alarm series data for the analysis of individual AVs (T-S-J) and their coactivations (T-C-P-J). Here, each configured alarm, e.g., a high- or low-alarm, is denoted by an individual AV. Finally, the postprocessed clustering results from T-S-J and T-C-P-J are merged by a novel validation step that focuses on the detected AS outliers. Fig. 1 shows the general structure of the proposed ''alarm series similarity analysis method'' (ASSAM) using the ''formalized process description'' given in [31]. The process operators (green rectangles) and the generated and processed information (blue hexagons) are described in detail below. T-S-J is specified by process operators O1.1, O1.2, O1.5, O1.6, O1.7, and O1.8 and results in I1.11, whereas T-C-P-J is defined by O1.1, O1.3, O1.4, O1.5, O1.6, O1.7, and O1.8 and generates the output I1.12.
B. DETAILS OF THE PROPOSED APPROACH
The ASSAM starts with O1.1 and a set of historical ASs (I1.1), which were obtained using the ''alarm coactivation and event detection method'' (ACEDM) proposed in [19]. The ACEDM uses a ''median absolute deviation''-based outlier detection in time distances between alarm events to find ASs. It was shown that the ACEDM is more precise and robust in detecting coherent abnormal situations than are methods that use arbitrary alarm rate-thresholds. The ASSAM uses a time series representation of alarm data, i.e., a binary alarm series for each AV α_i [15]:

S_i(t) = 1 if t ∈ T_i, and S_i(t) = 0 otherwise, (1)

where T_i is the set of times t in which α_i is active. Trivial ASs with only one active AV are eliminated. Moreover, to reduce the computational effort in the following steps, only those AVs that are active at least once in any of the subsequences are selected (I1.2). The time series for the coactivation of two AVs α_i and α_j can be represented as follows (following [18]):

S_ij(t) = S_i(t) · S_j(t). (2)

To calculate S_ij for all possible α_i and α_j in an AS, AVs must have an identical sampling rate and an identical number of samples. Here, only those AV pairs that are coactive at least once in any of the analyzed ASs are selected (I1.2).
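The binary alarm series and the pairwise coactivation series can be sketched in a few lines of NumPy. The sample values below are toy data and the variable names are illustrative, not taken from the reference implementation:

```python
import numpy as np

# binary alarm series: 1 while the alarm variable is active, 0 otherwise
S_i = np.array([0, 1, 1, 1, 0, 0, 1, 0], dtype=np.int8)
S_j = np.array([0, 0, 1, 1, 1, 0, 0, 0], dtype=np.int8)

# coactivation series: both variables active at the same sample;
# requires an identical sampling rate and number of samples
S_ij = S_i & S_j

# an AV pair is kept only if it is coactive at least once
is_coactive_pair = bool(S_ij.any())
```

Element-wise AND on equally sampled binary series is all that is needed, which is why identical sampling rates and lengths are a precondition.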
In O1.2 (T-S-J) and O1.3 (T-C-P-J) the TF-IDF is then computed to weight AVs and their pairwise coactivations, respectively, for each alarm subsequence AS (following [9]):

tf-idf(a, AS) = tf(a, AS) · idf(a), (3)

with the ''term frequency''

tf(a, AS) = |S_a|_AS, (4)

and the ''inverse document frequency''

idf(a) = log( |AS| / |{AS' ∈ AS : |S_a|_AS' > 0}| ), (5)

where a is either an AV (T-S-J) or a pair of AVs (T-C-P-J), |S_a|_AS is the number of samples in which a is active in AS, and AS is the set of all ASs. The pairwise consideration of AVs in the TF-IDF vectors of T-C-P-J (I1.4) implies a potentially high dimensionality. Furthermore, a single AV can have an excessive impact on the TF-IDF representation of an AS; i.e., it is considered in numerous elements of the TF-IDF vector. In fact, this kind of high-dimensional and redundant data representation increases the computational effort necessary for clustering similar ASs and can potentially negatively affect any found clustering solution [23], [25]. In the related research area of clustering similar textual documents, this limitation was addressed by applying a suitable dimensionality reduction technique to the TF-IDF vectors, i.e., a transformation into a relatively low-dimensional and less redundant representation.
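A minimal sketch of this TF-IDF weighting over binary alarm series, in plain Python. The active-sample counts are toy values and the helper names are assumptions; tf is taken here as the number of active samples and idf as the log-ratio of subsequences, one common TF-IDF variant:

```python
import math

# number of samples in which each AV is active, per alarm subsequence (toy data)
active_samples = [
    {"A1": 10, "A2": 3},   # AS 1
    {"A1": 4},             # AS 2
    {"A2": 7, "A3": 2},    # AS 3
]
n_as = len(active_samples)

def idf(av):
    # the fewer subsequences an AV appears in, the higher its weight
    n_containing = sum(1 for seq in active_samples if av in seq)
    return math.log(n_as / n_containing)

def tfidf_vector(seq, vocabulary):
    # tf = number of active samples of the AV in this subsequence
    return [seq.get(av, 0) * idf(av) for av in vocabulary]

vocab = ["A1", "A2", "A3"]
vectors = [tfidf_vector(seq, vocab) for seq in active_samples]
```

Rare AVs such as "A3" (active in one of three subsequences) receive the largest idf and therefore dominate the vector of the subsequence they occur in, which is exactly the weighting behavior described above.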
For example, two frequently applied linear dimensionality reduction techniques are the ''singular value decomposition'' (SVD) and the ''principal component analysis'' (PCA) [16], [22], [23]. An in-depth evaluation and comparison of both was conducted in [23]. It was shown that the PCA has some advantages over the SVD in cases where the target dimensionality of the TF-IDF vectors is relatively small. Thus, for the ASSAM presented here, we propose to use the PCA for dimensionality reduction.
The desired transformation from the n-dimensional TF-IDF vectors into a k-dimensional target projection, where k < n, is achieved by using the top k eigenvectors of the covariance matrix. These eigenvectors correspond to the largest eigenvalues and account for a descending proportion of the variance of the original TF-IDF vectors. To estimate a suitable value for k, a ''cumulative proportion of variance'' threshold τ_CPV can be used, which allows for enough eigenvectors to be retained so as to maintain a variance of at least τ_CPV of the original TF-IDF vectors [1], [30]. A detailed description of the PCA can be found in either [1] or [30].
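The variance-threshold selection of k can be sketched directly with a NumPy eigendecomposition (a library PCA such as scikit-learn's would work equally well; the data here is random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))   # 50 ASs, 10-dimensional TF-IDF vectors (toy data)
tau_cpv = 0.98                   # cumulative proportion of variance threshold

Xc = X - X.mean(axis=0)                       # center the TF-IDF vectors
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending order
order = np.argsort(eigvals)[::-1]             # sort descending by eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# smallest k whose top eigenvalues explain at least tau_cpv of the variance
cum = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(cum, tau_cpv) + 1)

X_reduced = Xc @ eigvecs[:, :k]               # k-dimensional projection
```

With isotropic random data k stays close to n; on redundant TF-IDF vectors such as those of T-C-P-J, the same threshold can cut the dimensionality drastically, as reported in Section IV.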
Subsequently, a suitable distance measure is used to calculate the distances between any two alarm subsequences AS_i and AS_j in O1.5. According to [28], the DBSCAN clustering algorithm can be combined with any distance measure that is consistent with the analyzed domain and data. For example, [25] used the cosine distance measure for clustering similar textual documents. Here, we follow the proposal given in [9], where the Euclidean distance measure was applied to the AFSA domain and showed promising results. It can be calculated as follows [9]:

d(AS_i, AS_j) = sqrt( Σ_{k=1}^{m} (v_{i,k} − v_{j,k})² ), (6)

where v_i and v_j are the TF-IDF vectors of AS_i and AS_j and m is the total number of features in the TF-IDF vectors. Finally, both resulting distance matrices I1.6 (T-S-J) and I1.7 (T-C-P-J) are normalized to the range 0 to 1.
Identical to T-A-J, the AS distance matrices are postprocessed here. This step aims to reduce spurious low distances between ASs that share only a small number of active AVs [2]. In O1.6, the Jaccard distances for all AS pairs are calculated using the following formula (following [9]):

d_Jac(AS_i, AS_j) = n_xor_ij / n_or_ij, (7)

where n_xor_ij is the number of AVs that are exclusively active in either AS_i or AS_j and n_or_ij is the number of AVs that are active in any of the two ASs. The resulting Jaccard distance matrix (I1.8) is then used in O1.7 for the postprocessing of I1.6 and I1.7. Each distance value in the postprocessed distance matrices I1.9 and I1.10 can be calculated as follows [9]:

d̂(AS_i, AS_j) = d(AS_i, AS_j) if d_Jac(AS_i, AS_j) ≤ τ_Jac, and d̂(AS_i, AS_j) = 1 otherwise, (8)

where τ_Jac is the Jaccard-distance threshold that determines whether an AS pair is considered potentially similar. In O1.8, both I1.9 and I1.10 are used to generate two partitions of AS using DBSCAN. Reference [9] demonstrated the feasibility of utilizing DBSCAN for the clustering of ASs. It identifies regions of high density, i.e., ASs that are close to each other in terms of the distance. Clusters are identified by core points, where an AS is considered as such if at least (minPts − 1) other ASs are within a distance less than or equal to a threshold ε. ASs with no neighboring ASs in proximity are considered outliers. Two advantages of DBSCAN are its distinct outlier label and the absence of a manual specification of the number of clusters [28]. The resulting clustering solution can be represented as C = {c_−1, c_0, c_1, . . . , c_n}, where c_i depicts the ith cluster and c_−1 groups all detected outliers. Here, T-S-J and T-C-P-J generate C^S (I1.11) and C^C (I1.12), respectively.
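The Jaccard-based postprocessing of the distance matrices can be sketched with toy values in NumPy. The matrices below are illustrative, not results from the paper:

```python
import numpy as np

# normalized TF-IDF distance matrix and Jaccard distance matrix (toy values)
d = np.array([[0.0, 0.2, 0.9],
              [0.2, 0.0, 0.7],
              [0.9, 0.7, 0.0]])
d_jac = np.array([[0.0, 0.3, 0.8],
                  [0.3, 0.0, 0.9],
                  [0.8, 0.9, 0.0]])
tau_jac = 0.4

# keep the distance only for potentially similar pairs; all other pairs
# are forced to the maximum normalized distance of 1
d_post = np.where(d_jac <= tau_jac, d, 1.0)
```

The resulting matrix can then be handed to a DBSCAN implementation that accepts precomputed distances, e.g., with a `metric="precomputed"` option, to form the two AS partitions.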
It can be assumed that C^S and C^C differ to some extent. In fact, preliminary tests have suggested that for some situations, one of the two chosen criteria can have advantages over the other and result in more meaningful clusters. To benefit from both, we propose a novel step (O1.9) that aims at validating the outliers in T-S-J (I1.11) by using T-C-P-J (I1.12). The former is used as the basis here since preliminary performance results have indicated that it is more robust to different settings of ε. The concept of the proposed approach is the following: for each outlier in c^S_−1, the corresponding label in C^C is analyzed. If T-C-P-J considers this AS as an outlier as well, it is labeled as such in the validated clustering solution Ĉ^SC (I1.13). If, however, the AS is part of c^C_i with i ≥ 0, the outlier label in T-S-J is considered potentially erroneous. Next, we try to find the best match for c^C_i in C^S. One way to achieve this is to compare c^C_i to each regular cluster in C^S using a similarity measure. Here, we propose using the Braun-Blanquet formula for the calculation of the similarity s^BB_ij between two clusters c_i and c_j. It can be calculated as follows [26]:

s^BB_ij = n_ij / max(|c_i|, |c_j|), (9)

where n_ij denotes the number of shared ASs in both clusters and |c_i| and |c_j| represent the number of ASs in c_i and c_j, respectively. Of all clusters in C^S with a similarity greater than or equal to a validation threshold τ_Val, the one with the highest similarity to c^C_i is considered the best match, i.e., c^S_j. Eventually, the former outlier is clustered in ĉ^SC_j. Otherwise, it remains an outlier and is grouped in ĉ^SC_−1. Moreover, all non-outlier cluster labels in Ĉ^SC are assigned according to the cluster labels in C^S.
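The outlier validation step can be sketched in plain Python. The cluster lists and helper names are illustrative assumptions; the Braun-Blanquet similarity divides the overlap of two clusters by the size of the larger one:

```python
def braun_blanquet(c_i, c_j):
    # similarity of two clusters of AS indices: overlap / larger cluster size
    shared = len(set(c_i) & set(c_j))
    return shared / max(len(c_i), len(c_j))

def validate_outlier(as_id, clusters_s, clusters_c, tau_val=0.5):
    """Re-assign a T-S-J outlier if T-C-P-J clusters it and a sufficiently
    similar T-S-J cluster exists; otherwise keep it an outlier (-1)."""
    c_c = next((c for c in clusters_c if as_id in c), None)
    if c_c is None:
        return -1  # outlier in both partitions
    best, best_sim = -1, tau_val
    for label, c_s in enumerate(clusters_s):
        sim = braun_blanquet(c_c, c_s)
        if sim >= best_sim:
            best, best_sim = label, sim
    return best

clusters_s = [[0, 1, 2], [3, 4]]      # regular clusters of T-S-J (toy data)
clusters_c = [[0, 1, 2, 5], [3, 4]]   # T-C-P-J groups AS 5 with cluster 0
label = validate_outlier(5, clusters_s, clusters_c)
```

Here AS 5 is an outlier in T-S-J but is clustered by T-C-P-J together with ASs 0–2, so it is re-assigned to the matching T-S-J cluster.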
C. DISCUSSION OF THE LIMITATIONS AND ADVANTAGES OF THE PROPOSED APPROACH
One limitation of the ASSAM arises from the computational effort necessary for the calculation of T-C-P-J; i.e., the coactivation of each AV pair needs to be determined for each sample and AS. Furthermore, as T-C-P-J considers only AV pairs, the implicit knowledge of more complex alarm coactivation dynamics possibly remains undiscovered.
Nevertheless, the ASSAM shows relevant advantages compared to state-of-the-art methods. Swapped alarm orders and a varying number of ACTs in similar abnormal situations can be characteristic of real-world industrial processes [5]. The proposed utilization of time series data in AFSA expands the view to the dynamic properties of activated AVs and the dynamic structure of the underlying ASs instead of focusing on a point-to-point examination of sequenced ACTs. In fact, the calculation of the TF in (4) is not affected by the order or number of ACTs. Moreover, randomly activated short alarms that are irrelevant for the situation have only a small impact due to the consideration of the number of active samples in (4). Hence, the proposed ASSAM and its components T-S-J and T-C-P-J fully satisfy the requirements R1 and R2.
IV. EVALUATION
This section evaluates and compares the performances and characteristics of three relevant AFSA methods described in Section II and the method proposed in Section III. Subsection IV.A gives a brief overview of the evaluation dataset used. Subsection IV.B deals with choosing a suitable evaluation measure. Subsection IV.C describes the experimental setup. The obtained evaluation results are presented in Subsection IV.D.
A. EVALUATION DATASET
The examined clustering methods are applied to the openly accessible simulation dataset 1 introduced in [19]. It is based on a simulation model of the ''Tennessee-Eastman-Process'' (TEP), a frequently used benchmark in process automation [4], [6]. It can be separated into five modules: a two-phase chemical reactor, a condenser, a vapor-liquid separator, a stripper, and a reboiler. Furthermore, the TEP includes 11 automatic pneumatic control valves, two pumps, and one compressor [6]. The alarm system of the TEP defines 81 low-alarm and 81 high-alarm thresholds as well as five high-high-alarm and three low-low-alarm thresholds [19].
The dataset includes 100 simulation runs with 300 specified abnormal situations. These situations were designed using eight different root-cause disturbances with variations in their respective durations, disturbance scaling, and combinations. These variations as well as random influences affect the number of activated AVs, the order of alarm instances, and their dynamic behavior. The alarm system generates a total of 7343 alarm instances over all 300 situations [19]. Fig. 2 illustrates an example subset of 18 AV time trends for a typical simulation run with three consecutive abnormal situations, where the third abnormal situation rapidly escalates into an emergency shutdown of the TEP. The alarm data in this dataset is represented using a single multivalued alarm series for each process variable (XMEAS), i.e., time series readings from a specific sensor, and each manipulated variable (XMV), i.e., time series readings from a specific pneumatic valve. For each AV, the effective alarm state at a time is constituted using one out of five unambiguous integer values, e.g., high-and low-alarms are represented using the values ''1'' and ''-1'', respectively [18], [19]. To render the application of the proposed ASSAM possible, we need to transform the multivalued alarm series into a binary representation according to (1).
FIGURE 2. Three example consecutive abnormal situations (abn. sit.). Solid blue lines represent the time trends of alarm variables. The lower level for each alarm variable represents a low alarm, and the higher level represents a high alarm. Red dotted lines represent the initiation of a root-cause disturbance. Green dashed-dotted lines represent the return to a normal operation (following [19]).

The application of the ACEDM on the TEP dataset results in 358 detected ASs, of which 310 ASs show more than one alarm instance. The latter are used as the preprocessed input for all methods examined here, enabling a direct comparison of the performances of the selected AFSA methods. One advantage of the TEP simulation dataset is that all induced abnormal situations are explicitly known [19], thus making it possible to use an external validity index, which compares the computed clusters to a given ground-truth partition [26]. The 310 preprocessed ASs are therefore manually assigned to 21 ground-truth clusters according to the details described in [19] and the technical report of the dataset. Each cluster includes 4 to 30 similar ASs. Furthermore, 14 ASs are labeled outliers, as they contain only random parts of the respective underlying abnormal situation and show no similarities to any other ASs.
B. EXTERNAL VALIDITY INDEX
For evaluation, a suitable external validity index needs to be chosen. A frequently used index in cluster evaluation [29], which evaluates the agreement of a ground-truth partition C_0 and a computed trial partition C_1 [26], is the adjusted Rand-index (ARI) [15]:

ARI = 2(ad − bc) / ((a + b)(b + d) + (a + c)(c + d)), (10)

where a (d) is the number of AS pairs that are in the same (different) cluster in both partitions and b (c) is the number of AS pairs that are in the same cluster in C_0 (C_1) but in different clusters in C_1 (C_0). If C_0 and C_1 are identical, the ARI yields a value of 1. A value of 0 arises in the case where C_0 and C_1 are statistically independent [29]. A detailed analysis of the ARI can be found in [29].
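The ARI can be computed from the four pair counts in plain Python, using the common pair-count formulation ARI = 2(ad − bc)/((a + b)(b + d) + (a + c)(c + d)); library implementations such as scikit-learn's `adjusted_rand_score` derive it from a contingency table instead. The label lists below are toy values:

```python
from itertools import combinations

def pair_counts(ground_truth, trial):
    """a = pair in same cluster in both partitions, d = different in both,
    b = same only in the ground truth, c = same only in the trial."""
    a = b = c = d = 0
    for i, j in combinations(range(len(ground_truth)), 2):
        same0 = ground_truth[i] == ground_truth[j]
        same1 = trial[i] == trial[j]
        if same0 and same1:
            a += 1
        elif same0:
            b += 1
        elif same1:
            c += 1
        else:
            d += 1
    return a, b, c, d

def ari(ground_truth, trial):
    a, b, c, d = pair_counts(ground_truth, trial)
    denom = (a + b) * (b + d) + (a + c) * (c + d)
    return 1.0 if denom == 0 else 2 * (a * d - b * c) / denom

identical = ari([0, 0, 1, 1], [0, 0, 1, 1])  # identical partitions
```

For identical partitions b = c = 0, so the expression collapses to 2ad/2ad = 1, matching the property stated above.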
C. EXPERIMENTAL SETUP
An overview of the methods examined here is given in Table 1. Two methods, J and MSW-J, are used as benchmarks for the evaluation of the TF-IDF-based methods, namely, T-A-J, the proposed ASSAM, and its components T-S-J and T-C-P-J. In addition, some of these methods are compared to versions of them that do not use the dimensionality reduction step in process operator O1.4 (see Fig. 1), namely, T-C, or the postprocessing step in process operator O1.9, namely, T-A, T-S, and T-C-P. This evaluation approach allows for a systematic and in-depth examination of the effectiveness of the ASSAM and its components. Except for MSW-J, the examined methods utilize the DBSCAN clustering algorithm. For MSW-J, the algorithm parameters were set according to [5], i.e., δ = −0.4, µ = −0.6, and σ² = 4. The τ_Jac for MSW-J, T-A-J, T-S-J, and T-C-P-J was set to 0.4, as suggested in [2]. Based on preliminary tests, the τ_Val of the ASSAM was set to 0.5. For the minPts parameter of DBSCAN, integer values between 3 and 30 were examined. The latter describes the size of the largest ground-truth cluster. The chosen range includes the default value of minPts = 4 as recommended in [28]. For the distance threshold of the AHC-SL and the ε of DBSCAN, values between 0.001 and 1.000 with a step size of 0.001 were assessed, since all resulting distance matrices are normalized to the range 0 to 1. For the evaluation of the ASSAM, both components T-S-J and T-C-P-J used the same minPts and ε due to the assumption that the individual tuning of two parameter settings would be cumbersome in an industrial application. In addition, the τ_CPV of the PCA was set to 98%. This setting follows the findings given in [30].
All methods examined here were implemented in Python (Version 3.8.5). Additional software libraries that were used are NumPy [12] (Version 1.21.3), Pandas [21] (Version 1.3.4), and Scikit-learn [24] (Version 0.24.2). The executable code of the ASSAM as well as the reported evaluation results and the used ground-truth partition are openly accessible. 2
D. EVALUATION RESULTS
For each method, the highest ARI value, which was obtained by applying all considered parameter settings, is shown in Fig. 3. J, T-A, and T-A-J, which do not consider ACT durations or their order, show the lowest ARI values of all examined methods. Indeed, in some cases, these three methods detected similarities between ASs that are in different ground-truth clusters and arose from different root-causes, thus resulting in fewer, though larger, computed clusters, i.e., 13 clusters for J with an optimal minPts of 4 and an optimal ε of 0.191. By using an optimal minPts of 3 and an optimal ε of 0.081, both T-A and T-A-J labeled 32 outliers, which represents the highest number of all examined methods. The corresponding ASs were characterized by random variations in the number of ACTs of those AVs with a high value in the IDF vector. In contrast, the consideration of the order of ACTs, with tolerance for short-term variations, in MSW-J resulted in a higher ARI value. MSW-J detected 20 clusters and 24 outliers using an optimal distance threshold of 0.276. An in-depth inspection of the obtained results revealed that MSW-J was not always able to distinguish between significant variations for the same root-causes. Moreover, the detected outliers differed considerably from those in the given ground-truth; i.e., MSW-J was not always able to find similarities between two ASs with identical ground-truth cluster labels in cases where the two disagreed on the number of ACTs.
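The final step of MSW-J, single-linkage agglomerative hierarchical clustering (AHC-SL) cut at a distance threshold such as the 0.276 reported above, can be sketched with SciPy on a hypothetical distance matrix; the modified Smith-Waterman distance itself is not reproduced here:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical symmetric, normalized AS distance matrix (6 subsequences, 2 clear groups).
D = np.array([
    [0.0, 0.1, 0.1, 0.9, 0.8, 0.9],
    [0.1, 0.0, 0.2, 0.9, 0.9, 0.8],
    [0.1, 0.2, 0.0, 0.8, 0.9, 0.9],
    [0.9, 0.9, 0.8, 0.0, 0.1, 0.2],
    [0.8, 0.9, 0.9, 0.1, 0.0, 0.1],
    [0.9, 0.8, 0.9, 0.2, 0.1, 0.0],
])

Z = linkage(squareform(D), method="single")          # single-linkage dendrogram
labels = fcluster(Z, t=0.276, criterion="distance")  # cut at the reported threshold
print(labels)  # two clusters
```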
The proposed methods T-S-J and T-C-P-J, as well as the alternative versions T-S, T-C, and T-C-P, showed an improved performance compared to that of the existing methods. All proposed methods presented an optimal minPts value of 3. The proposed application of the dimensionality reduction using the PCA in T-C-P and T-C-P-J yields a reduced TF-IDF vector with 17 features; i.e., a reduction of more than 99% compared to T-C. Moreover, the optimal ARI values of T-C and T-C-P in Fig. 3 reveal that this additional processing step improved the overall clustering performance by more than 5%.
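The dimensionality-reduction step can be sketched with scikit-learn's PCA, whose fractional n_components is interpreted as a cumulative explained-variance threshold (τ_CPV = 98% as stated above); the high-dimensional TF-IDF matrix below is random placeholder data with a low-rank structure:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Placeholder for a high-dimensional TF-IDF matrix: 100 ASs x 2000 features,
# generated from 12 latent directions plus small noise.
latent = rng.normal(size=(100, 12))
X = latent @ rng.normal(size=(12, 2000)) + 0.01 * rng.normal(size=(100, 2000))

# Keep the smallest number of components reaching 98% cumulative explained variance.
pca = PCA(n_components=0.98, svd_solver="full")
X_red = pca.fit_transform(X)
print(X.shape, "->", X_red.shape)
```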
Both T-S-J and T-C-P-J were able to detect 23 clusters as well as 12 and 11 outliers with as few as 17 and 18 mislabeled ASs, respectively. An in-depth inspection of the cluster labels resulting from T-S-J and T-C-P-J revealed that they are essentially identical except for five ASs, which mainly stem from two different abnormal situations. Interestingly, in both cases, one of the methods classified two of the subsequences as outliers, whereas the other method classified them correctly according to the ground-truth cluster labels.
The application of both T-S-J and T-C-P-J and the subsequent validation of outliers in the ASSAM were shown to result in more meaningful clusters; i.e., only 16 ASs were mislabeled, which resembles the ground-truth best. This finding was also supported by the ASSAM yielding an ARI value superior to that of all other examined methods. The optimal values for minPts and ε for the ASSAM were 3 and 0.095, respectively. Another significant phenomenon revealed in Fig. 3 is that the postprocessing of the TF-IDF-based methods was beneficial regarding the optimal ARI value. This phenomenon is further analyzed in Figs. 4 and 5. Fig. 4 illustrates the heatmaps of the distance matrices for the TF-IDF-based methods. The ASs in the columns and rows are ordered by the ground-truth cluster labels. This allows for the visual evaluation of the distance measures used. A trial partition identical to the ground-truth is characterized by dark colored blocks along the diagonal of the distance matrix. In contrast, undesired similarities between different ground-truth clusters appear as dark colored off-diagonal blocks. Fig. 4 (a), (b), and (c) show the distances without the application of the postprocessing step. The distance matrix of T-A in Fig. 4 (a) contains some erroneously high distances between ASs that have the same ground-truth cluster label and numerous spuriously high similarities in terms of the off-diagonal blocks. Only one cluster presents a desirably high visual contrast; the corresponding ASs are characterized by only two continuously active AVs. The distance matrices of T-S and T-C-P in Fig. 4 (b) and (c) show a substantially higher visual contrast between blocks along the diagonal and in the off-diagonal areas than shown in Fig. 4 (a). The highest contrast can be found in Fig. 4 (b), which is reflected by T-S having the highest ARI value of all TF-IDF-based approaches without postprocessing. The lower performance and lower visual contrast of T-C-P in Fig. 4 (c) can possibly be explained by different abnormal situations showing a similar dynamic propagation behavior, in terms of coactive AVs, but relevant differences in their respective initial phase where no or only few coactivations occur; i.e., T-C-P is not able to distinguish between such abnormal situations using only coactivations. Fig. 4 (d), (e), and (f) show the computed distance matrices after the application of the postprocessing step. By assigning the highest distance value to most of the erroneous AS pairs, the resulting visual contrast shows high agreement with the cluster structure of the ground-truth. However, Fig. 4 (d) demonstrates that T-A-J yields low distance values for most of the remaining AS pairs, thus impeding the detection of the correct ground-truth clusters. In contrast, Fig. 4 (e) and (f) depict overall higher distances in the remaining off-diagonal pairs for T-S-J and T-C-P-J. This advantageous characteristic resulted in higher ARI values for both proposed components of the ASSAM.
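For illustration only (not the authors' exact pipeline), a TF-IDF representation of ASs over alarm-variable "terms" and a normalized cosine distance matrix, of the kind visualized in the heatmaps, can be produced as follows; the alarm tags are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

# Each AS as a "document" of activated alarm variables (hypothetical tag names).
sequences = [
    "PI101 TI102 TI102 FI103",
    "PI101 TI102 FI103 FI103",
    "LI201 LI202 LI202 PI205",
    "LI201 LI202 PI205 PI205",
]

# Treat whitespace-separated tags as tokens and weight them by TF-IDF.
tfidf = TfidfVectorizer(token_pattern=r"\S+")
X = tfidf.fit_transform(sequences)

# Cosine distances of non-negative TF-IDF vectors lie in [0, 1].
D = cosine_distances(X)
print(D.round(2))  # small distances within each pair, large across pairs
```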
The performance and the number of resulting clusters for the TF-IDF-based methods with a minPts value of 3 and over all considered settings of the DBSCAN parameter ε are illustrated in Fig. 5. The corresponding diagram for the ASSAM is similar to that of T-S-J and is therefore not depicted here. The comparison of Fig. 5 (a), (b), and (c) with their postprocessed counterparts again reflects the benefit of the postprocessing step. Moreover, the close inspection of Fig. 5 indicates that the range of suitable values for ε, which results in ARI values close to the maximum, is approximately twice as long for T-S-J and T-C-P-J compared to T-S and T-C-P. In conclusion, the postprocessing step makes the proposed methods more robust to changes in ε and the clustering results more reliable in cases where an optimal ε cannot be determined using a ground-truth partition.
V. DISCUSSION AND CONCLUSION
The evaluation in Section IV showed that the existing AFSA methods are not able to meet the requirements defined in [5] (see Section II) to the fullest extent. In fact, the in-depth examination revealed that the methods J, T-A, T-A-J, and MSW-J can handle a certain ambiguity in the order of alarms in two compared ASs (R1), whereas none of them could suitably tolerate irrelevant alarms occurring in one or both ASs (R2). These methods are therefore not able to correctly detect all underlying AS similarities. Despite this distinct limitation, the clustering results obtained by MSW-J showed a relatively high agreement with the given ground-truth of the TEP dataset used here. However, the MSW necessitates the cumbersome tuning of four interrelated parameters, i.e., δ, µ, σ², and the distance threshold of the AHC-SL.
It was further demonstrated that the proposed TF-IDF-based method ASSAM as well as its components T-S-J and T-C-P-J can fulfill all given requirements. Moreover, the ASSAM achieves the best performance among all considered AS clustering methods. This result confirms the assumption that the clustering results can be improved when using alarm series data and alarm coactivations as input. Overall, the evaluation showed that clustering methods that consider the dynamic properties of activated AVs and the dynamic structure of the ASs consistently demonstrate a higher performance than that of methods that utilize a less extensive data input.
One limitation of the ASSAM results from the relatively high computational effort of T-C-P-J; i.e., each sample in a subsequence needs to be analyzed for occurring pairwise alarm coactivations. In contrast, T-S-J maintains a relatively low computational burden. Another limitation results from the necessity of tuning the DBSCAN parameter ε. In this context, it was proven that the postprocessing step of T-S-J and T-C-P-J makes them and the ASSAM more robust to changes in the parameter settings than their variants without postprocessing and than T-A-J. It is noteworthy that this beneficial characteristic of the ASSAM makes it more suitable for an industrial application where a priori knowledge for parameter tuning can be limited. Moreover, this finding substantiates the viability of the postprocessing step, as hypothesized in [9]. In addition, it was shown that the application of a suitable dimensionality reduction technique on the otherwise high-dimensional TF-IDF vectors of T-C-P-J significantly reduces the computational effort necessary for calculating the AS clustering and considerably improves the quality of the clustering results.
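The source of T-C-P-J's computational burden, the per-sample analysis of pairwise alarm coactivations, can be sketched as follows on a hypothetical binary alarm series; the explicit loop is O(T·n²) over T samples and n alarm variables, although it collapses to a single matrix product:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical binary alarm series: T samples x n alarm variables (1 = active).
T, n = 500, 8
A = (rng.random((T, n)) < 0.1).astype(int)

# Count, for every AV pair (i, j), the number of samples where both are active.
coact = np.zeros((n, n), dtype=int)
for t in range(T):               # one outer product per sample -> O(T * n^2)
    coact += np.outer(A[t], A[t])

# Vectorized equivalent of the loop above.
print(np.array_equal(coact, A.T @ A))  # -> True
```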
Furthermore, the evaluation indicated a high agreement between the clustering results of T-S-J and T-C-P-J. However, the data also showed that the proposed combined approach ASSAM has advantages over the individual methods. For industrial practitioners, we recommend using T-S-J in cases where a low computational burden is of relevance. In other cases, we propose using the ASSAM as intended. It is reasonable to assume that in processes similar to the TEP used here, this approach can produce more meaningful clustering results. Future studies should apply the proposed ASSAM and its components T-S-J and T-C-P-J to further industrial and experimental datasets. Moreover, it should be investigated whether modern machine learning methods, e.g., representation learning, can improve the analysis of similar historical ASs. For this purpose, further research efforts could evaluate whether a more extensive consideration of the alarm dynamics and an examination of the most significant subsets of coactive AVs are beneficial for the performance of the AFSA.
Ultrasound‐microbubble cavitation facilitates adeno‐associated virus mediated cochlear gene transfection across the round‐window membrane
Abstract The round window of the cochlea provides an ideal route for delivering medicines and gene therapy reagents that can cross the round window membrane (RWM) into the inner ear. Recombinant adeno‐associated viruses (rAAVs) have several advantages and are recommended as viral vectors for gene transfection. However, rAAVs cannot cross an intact RWM. Consequently, ultrasound‐mediated microbubble (USMB) cavitation is potentially useful, because it can sonoporate the cell membranes, and increase their permeability to large molecules. The use of USMB cavitation for drug delivery across the RWM has been tested in a few animal studies but has not been used in the context of AAV‐mediated gene transfection. The currently available large size of the ultrasound probe appears to be a limiting factor in the application of this method to the RWM. In this study, we used a home‐made ultrasound probe with a diameter reduced to 1.5 mm, which enabled the easy positioning of the probe close to the RWM. In guinea pigs, we used this probe to determine that (1) USMB cavitation caused limited damage to the outer surface layer of the RWM, (2) an eGFP‐gene carrying rAAV could effectively pass the USMB‐treated RWM and reliably transfect cochlear cells, and (3) the hearing function of the cochlea remained unchanged. Our results suggest that USMB cavitation of the RWM is a good method for rAAV‐mediated cochlear gene transfection with clear potential for clinical translation. We additionally discuss several advantages of the small probe size.
| INTRODUCTION
Gene transfection is a critical procedure in both genetic studies and gene therapy. Gene transfection methods can be divided into two categories: non-viral and viral. Viral methods of gene transfection are more efficient, despite recent rapid progress in non-viral gene transfection methods. [1][2][3][4][5][6][7][8][9][10] Among the vectors that have been tested, recombinant adeno-associated viruses (rAAV) exhibit clear advantages such as low immunogenicity, long-lasting transfected gene expression in various host cells, and non-exogenous DNA insertion into the genomes of transfected cells. 3 This viral vector has been used in gene therapy studies of the auditory system in animal models [11][12][13][14][15] and human trials. 11,16 The inner ear is highly isolated from surrounding organs and tissues. This unique feature makes it an ideal organ for genetic manipulation, with a low risk of side effects. However, this feature also makes it difficult to access. Generally, rAAV vectors must be injected into the inner ear, either via the round window membrane (RWM) or by cochleostomy. However, the injection disrupts the integrity of the inner ear, and might impair the hearing function.
The RWM has been explored as an approach to deliver drugs to the inner ear. 17,18 Unfortunately, the intact RWM is not permeable to rAAVs, 19 and therefore rAAV-mediated gene transfection via the RWM requires an injection. 16,[20][21][22] This barrier could be overcome by increasing the RWM permeability temporarily. In one of our previous studies, we reported that this could be realized by treating the RWM with digestive enzymes. 23,24 Consequently, temporary RWM damage allows the rAAV to diffuse across the RWM. Since the treatment itself does not cause hearing loss, this method has potential in cochlear gene therapy for protective purposes. However, the effectiveness of this treatment varied among individual subjects, likely due to variations in RWM thickness and local tissue reactions to the enzyme solution.
Ultrasound-mediated microbubble (USMB) cavitation can create small pores on the cell membrane (sonoporation). [25][26][27] This temporary injury significantly increases the permeability of the cell membrane to large molecules. 28,29 The wound created by the USMB cavitation is self-healable, 30 and therefore such treatments do not permanently impair the normal functions of the treated cells. In addition to medication delivery, 26,31 the use of USMB cavitation for gene transfection via plasmid DNA, siRNA, and miRNA has been investigated. 27,28,32 USMB-mediated AAV gene transfection in the rat retina has also been reported. 33 In that application, however, AAV was injected into the subretinal space before USMB was applied. Such an approach is not safe if applied in cochlear gene transfection.
Two studies have applied the USMB method for drug delivery via the RWM. 29,34 USMB effectively increased the permeability of the guinea pig RWM to large molecules such as biotin-FITC. 29 This method successfully facilitated the delivery of dexamethasone across the RWM and protected the cochleae against noise damage. 34 However, the US probes used in these studies had a diameter of 6 mm.
Such a large probe could not be inserted near the RWM even in human ears. The long working distance requires a larger amount of working solution and a higher acoustic power, which may be potentially harmful.
In addition, no previous study has investigated the usefulness of this method for AAV-mediated gene transfection via the RWM. As viral vectors are highly expensive, MB-vector packaging or coadministration of the virus with MBs appears to be impractical. In addition, packing the vectors into MBs may impair the activity of the virus. Therefore, the viral vector must be administered after USMB is applied to the RWM. This requires that the wound not seal too quickly. To date, there are no data on how long the damage caused by USMB lasts. In one study, the sonoporation created by a single shot of USMB healed within seconds. In another study, RWM damage caused by USMB was observed with electron microscopy, but without information on how long the wound takes to recover. 35 In this study, we developed a new ultrasound probe with a considerably smaller diameter (1.5 mm). By using this small probe, we were able to create intense, focalized damage to the RWM with a lower ultrasound power and a smaller amount of MB solution. The damage was limited to the outer epithelial layer of the RWM and lasted for more than a day. Effective eGFP gene transfection was observed when rAAV-eGFP was administered after USMB treatment.
Additionally, a new-generation rAAV vector (AAV2/Anc80L65) was used to achieve satisfactory transfection. 36,37 This approach should be useful for the future development of cochlear gene therapies and their translation to humans.
| Animals and research design
Twenty-seven 2-month-old male guinea pigs (albino Hartley) were obtained for this experiment from Shanghai Songlian Lab Animal Field (Shanghai, China) with body weight between 250 and 350 g. All animals passed Preyer's reflex test, an otoscope inspection and a baseline hearing evaluation with an auditory brainstem response (ABR) test.
The guinea pigs were then randomly assigned into six different groups. To evaluate the structural changes in the RWM caused by USMB, the middle ear was filled with the fixative immediately after ultrasound treatment to fix the RWM. The cochlea was further fixed after the animal was sacrificed. To evaluate AAV transfection, the animals were subjected to a repeated ABR after a 2-week interval prior to sacrifice.
The cochleae were harvested and treated, to investigate either the structure of the RWM or the transfection of AAV across the neuroepithelium. All the experimental procedures were approved by the Institutional Animal Care and Use Committee of the Shanghai Sixth People's Hospital affiliated to Shanghai Jiaotong University (permit number DWLL2017-0295).
| ABR recording
The animals were anesthetized via an intraperitoneal injection of ketamine and xylazine (40 and 10 mg/kg, respectively) and placed on a thermostatic heating pad to maintain the body temperature at ~38 °C.
The ABR tests were performed in an acoustically and electrically shielded chamber.
| Surgery for gene transfection or RWM treatment
The subjects were anesthetized with inhaled isoflurane (4% for induction, 2% for maintenance, 0.3 L/min O 2 flow rate). The animal's head was placed in the lateral position and fixed with a stereotaxic restraint.
The body temperature was maintained using a thermostatic heating pad at 38 °C. The animal was laid laterally, and the head was held in position using a custom-made holder (Figure 3a). The tympanic bony bulla was exposed using a post-auricular approach. After administering local analgesia with lidocaine, a 2 cm arc incision was made along the root of the earlobe, and the mastoid was exposed via blunt dissection.
A hole with a diameter of 3-4 mm was made on the bulla to expose the RW niche and the bony cochlear wall. Next, the animal was laid in the lateral supine position, and the head orientation was adjusted such that the RW surface faced up (Figure 3b). The ultrasound probe was inserted into the correct position against the RW niche with assistance from a manipulator. The lower edge of the probe front was placed on the RW niche (Figure 3c). The estimated distance between the front surface of the probe lens and the RW was 0.5-1 mm (Figure 3c).
For the USMB treatment of the RWM, the ultrasound contrast agent (Definity, USA, DIN:02243173) was prepared and injected into the RW niche to fill the space between the probe lens and RWM completely.
The US generator was turned on to yield 5 min of sonication. After the US exposure, the MB solution was suctioned, and the middle ear cavity was irrigated with sterile saline and the residual solution was cleaned.
To observe the damage created by the USMB treatment, a fixative solution (2.5% glutaraldehyde) was used to fill the middle ear cavity immediately after washing. The animal was then sacrificed with an overdose of injected pentobarbital (100 mg/kg, i.p.). The animal was then decapitated under deep anesthesia, and the cochlea was harvested.
F I G U R E 1 Ultrasound probe and acoustic measurements. (a) Photograph of the tip of the finished ultrasound probe, showing the aluminum lens and copper layer. (b) Impedance and phase response curves. The resonance peak at 1.55 MHz (indicated by the arrow, i.e., phase peak) was targeted for probe activation by the pulse pattern. (c) Raw acoustic pressure waveform at a fixed distance of 5.8 mm from the probe after electrical activation. This distance allowed the complete distinction of the acoustic response from the electrical stimulus artifact. The probe was stimulated with five cycles of ±30 V square waves at the indicated frequencies. (d) The data from (c) are presented after low-pass filtering with a cutoff of 3.5 MHz. The greatest filtered amplitude was observed at 1.70 MHz, and this frequency was selected as the best MB cavitation frequency for the probe. (e) The peak negative pressure over a volume was recorded in response to a 1.70-MHz stimulus. The maximum axial slice is shown. (f) The data from E are shown after low-pass filtering of the waveform at each location as in (d) relative to (c) For AAV transfection via the RWM, a piece of gelfoam was placed in the RW niche after the US treatment. Ten microliters of an AAV solution were injected into the gelfoam. For AAV transfection via cochleostomy, a small hole (diameter: 0.3 mm) was drilled via the bone shell of the basal turn. Ten microliters of viral vector were injected into the scala tympani (rate: 20 nL/s) through a 34-gauge glass tip (microfil) connected to a picrosyringe pump (Micro4; WPI, Kissimmee) by a polyethylene tube. The cochleostomy hole was then sealed with muscle tissue, and the hole of the bulla was closed by suturing the muscle and skin.
The adapted AAV2/Anc80L65 backbone was similar to the vector in a previous report. 36,37 The rAAV vector was constructed to carry an AAV2 ITR-flanked genome encoding CAG-driven eGFP, a Woodchuck Hepatitis Virus Regulatory Element (WPRE) and a bovine Growth Hormone poly-adenylation site (Taitool Bioscience, China).
The vector was provided at a titer of 1.16 × 10^13.
| Statistics
All data are expressed as means ± standard error of the mean (SEM). ANOVAs followed by post hoc testing (Holm-Sidak method) were performed using SigmaPlot (ver. 14; Systat Software Inc., San Jose, CA). In all analyses, p < 0.05 was taken to indicate statistical significance.
In the RWM observed immediately after the USMB treatment, a focused region could be identified as damaged, while the other regions appeared to be normal. The damaged region took up approximately 1/3 of the total RWM area and was located anteriorly. A square region is circled in Figure 4d and magnified in Figure 4e,f to show the details of the damaged epithelial layer. The damaged cells frequently contained round-shaped, scar-like structures, which were likely the residues of large MBs. Figure 4g is also a magnified image of the damaged region. Figure 5 also shows a control sample observed immediately after treatment in which the RWM was exposed only to ultrasound without microbubbles (Figure 5g-i); the epithelial layer of the RWM was nearly intact in those images. In the samples shown in Figure 6d-f, however, the continuity of the outer epithelium was interrupted (as shown by the white circle in Figure 6d). This interruption of intercellular continuity did not extend to the deeper layers.
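Since the original analysis was run in SigmaPlot, the following is only a rough Python sketch of the described procedure (one-way ANOVA followed by Holm-Sidak-adjusted pairwise t-tests) on hypothetical threshold data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical ABR thresholds (dB SPL) for three groups of ten animals each.
groups = [rng.normal(30, 3, 10), rng.normal(31, 3, 10), rng.normal(40, 3, 10)]

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(*groups)

# Pairwise t-tests with a Holm-Sidak step-down adjustment of the p-values.
pairs = [(0, 1), (0, 2), (1, 2)]
raw_p = np.array([stats.ttest_ind(groups[i], groups[j]).pvalue for i, j in pairs])
order = np.argsort(raw_p)
m = len(raw_p)
adj = np.empty(m)
running_max = 0.0
for rank, idx in enumerate(order):
    # Sidak correction over the m - rank remaining hypotheses, enforced monotone.
    p_adj = 1.0 - (1.0 - raw_p[idx]) ** (m - rank)
    running_max = max(running_max, p_adj)
    adj[idx] = running_max

print(p_anova, dict(zip(pairs, adj.round(4))))
```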
| ABR threshold
ABRs were tested to examine the hearing threshold at the baseline (i.e., before surgery) and 2 weeks after the transfection surgery. The results in Figure 7 show that AAV delivery via the RWM after USMB treatment does not cause a shift in the ABR threshold (Figure 7a). In contrast, a small ABR threshold elevation was observed in subjects treated with cochleostomy, for which the post hoc pairwise test revealed a significant threshold shift at 16 kHz (Figure 7b) relative to the baseline (q = 3.336, p = 0.023). A significant between-group difference was revealed by a two-way ANOVA (F 1,48 = 6.391, p = 0.015).
| Short summary
In this study, we used a homemade ultrasound probe with a transducer diameter as small as 1.5 mm. When the probe was inserted against the RWM niche, we managed to maintain the structural integrity of the middle ear necessary for good hearing.
USMB-mediated cavitation caused controllable, focused, and reversible RWM damage (Figures 4 and 5) that was limited to the outer epithelial layer (Figure 6). This treatment effectively increased the permeability of the RWM to the rAAV, which could not normally pass across the RWM to transfect cochlear cells (Figure 8c,d), resulting in the satisfactory transfection of cochlear sensory cells (Figures 8a and 9) by using AAV2/Anc80L65. Although the RWM approach yielded a lower transfection rate than cochleostomy, the former approach did not affect the hearing thresholds of treated animals. The small probe size allowed us to insert the probe in touch with the RWM niche in the guinea pig ear. Therefore, the estimated distance between the probe lens and the RWM was within 0.5-1 mm (Figure 3c). At this distance, our tests indicated that the peak negative acoustic pressure delivered by our probe typically reached 0.3-0.5 MPa, and the MI was adjusted to 0.5. Placing the probe against the RW niche enabled a much better focus on the target and required a reduced device output, as much less energy was lost to attenuation.
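For context, the mechanical index (MI) cited here is conventionally defined as the derated peak negative pressure in MPa divided by the square root of the center frequency in MHz. The sketch below applies that convention to the probe values reported above; the paper's exact derating is not specified, so the numbers are illustrative:

```python
from math import sqrt

def mechanical_index(peak_negative_pressure_mpa: float, frequency_mhz: float) -> float:
    """MI = PNP [MPa] / sqrt(f [MHz]) (conventional definition, derated pressure)."""
    return peak_negative_pressure_mpa / sqrt(frequency_mhz)

# Probe values reported above: 0.3-0.5 MPa at the 1.70 MHz driving frequency.
for p in (0.3, 0.5):
    print(f"{p} MPa @ 1.70 MHz -> MI = {mechanical_index(p, 1.70):.2f}")
```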
The focalized damage to the RWM is shown in both Figures 4d and 5a. In fact, in all the samples treated with USMB, the damage was limited to an oval-shaped region in the anterior 1/3 of the RWM. This corresponds well to the pointing direction of our probe.
| USMB methods for cochlear drug delivery
Collectively, only two studies published by one group have addressed the use of USMB in drug delivery across the RWM. 29,34 In these reports, the ultrasound probe diameter was 6 mm. Moreover, the probe was placed outside the middle ear, which then had to be filled fully with MB solution. The estimated distance between the front surface of the probe and the RWM was 5 mm. The acoustic intensity was 1-3 W/cm², which corresponded to an MI of 0.147-0.283. In this setting, USMB considerably enhanced the transportation of biotin-FITC, which can permeate the intact RWM. The integrity of the RWM in response to this USMB treatment method was reported more recently in a separate paper by the same group. 35
4.4 | USMB in rAAV-mediated cochlear gene therapy
US has long been recognized as a useful tool for targeted material delivery in therapeutic applications, including gene transfection (see reviews 26,31,32,[39][40][41] ). MBs have been used as imaging enhancers since the 1990s. 41,42 Shortly thereafter, the application of MBs was extended to therapeutic areas. 43,44 Several potential mechanisms have been proposed to explain how USMB methods enhance cell permeability and drug uptake. Depending on the magnitude of the US driving pressure, the MB response may shift from linear spherical to nonlinear or nonspherical oscillations and eventually to inertial cavitation. 31 At a driving pressure greater than 300 kPa, the fluid inertia will overcome the pressure inside the MBs, resulting in bubble collapse and/or fragmentation. 45,46 The surrounding cells exposed to the shock waves and jet formation associated with cavitation can incur damage ranging from small and temporary pores (~1 μm in diameter), which heal quickly, 30,[47][48][49][50] to large damage (>10 μm) associated with cell death. 49
| MB selection and RWM damage
MBs typically have diameters of 1-10 μm and comprise a gas core and lipid shell. For clinical applications, several features of MBs, such as high biodegradability, low immunogenicity, sufficient flexibility and stability, are of concern. Regarding cochlear gene transfection, the ability of MB to create RWM damage that would be sufficiently persistent but healable is the major concern. MB properties such as the shell material, size, and concentration are important because each may affect the induction of inertial cavitation under ultrasonic exposure. 57,58 Lipids, proteins, polymers, or a combination of these materials have all been used in the shells of MBs. MBs coated with lipids are among the most interesting and frequently used formulations in studies associated with drug delivery. [59][60][61][62][63][64] The MB cavitation effect appears to be related to the size and total volume of MBs in the solution. One study reported that both the inertial cavitation dose and the BBB opening volume were positively correlated with the diameters of the MBs. 65 However, another study reported that the microbubble gas volume dose, not the size, determined the effect. 61
| Safety concerns
In this study, the intense damage caused by the application of USMB was limited to the RWM epithelial cells facing the tympanic cavity but was not extended to the deeper layers. In one of our previous reports, the RWM could be damaged using digestive enzymes. 24 In this study, the damage was also limited to the outer epithelial layer. Functionally, we observed no hearing losses in subjects treated with either RWM digestion in the previous study or with USMB in the present study.
These results suggest that the application of USMB to the RWM is a safe method for cochlear gene transfection. Moreover, the USMB method is more controllable than our previously reported digestion method.
Other than the RWM approach, cochleostomy and canalostomy have been evaluated for cochlear gene transduction by AAV. In the best scenario, cochleostomy can achieve safe AAV-mediated cochlear gene transfection in a large animal model such as the guinea pig, with less than a 10 dB threshold shift. However, such good hearing preservation is difficult to achieve in adult mice with cochleostomy. 15 AAV injection via canalostomy can effectively infect cells of the cochleae and vestibular organs in neonatal mice without significant hearing loss. 11,14 However, the great recovery ability of the neonatal mouse cochlea after intense surgical injury is not likely duplicable in adult mice. More importantly, both canalostomy and cochleostomy are less likely to be translated to human cochlear gene therapy, especially for protective purposes, in which hearing preservation is critical. Unlike rodents, human inner ears are deeply embedded in the temporal bone. Both cochleostomy and canalostomy require intense surgery and likely carry a risk of causing hearing loss. In humans, the RWM approach is the only one that has been utilized for inner ear drug delivery (e.g., in the treatment of sudden sensorineural hearing loss and Meniere's disease 67,68 ).
| Limitations and future improvements
In this study, we compared the transfection efficiencies between the USMB-RWM approach and the cochlear injection of virus via cochleostomy. A slight hearing loss was observed in the subjects after cochleostomy but not after USMB. However, the transfection rate was significantly lower in the USMB group than in the cochleostomy group (Figure 9). While our focus in this study is on cochlear gene transfection, the RWM approach is likely also useful for gene transfection in the vestibular system, considering that the RWM is closer to the vestibule than to the cochlea. We intend to evaluate this potential in a future study, especially after the delivery system has been optimized. Several possibilities for further improvements are under consideration. The first involves the use of a smaller probe. Although the 1.5 mm probe allowed us to place the probe on the ring of the round window niche, the surgery required to open the area for access remains quite invasive. Reducing the probe size to 1 mm would enable the surgery to be performed more easily, and the probe could be inserted into the niche along with a smaller amount of MB solution. However, the difficulty of manufacturing these devices increases as the diameter decreases. The second improvement involves packaging the rAAVs into MBs or coadministering the rAAVs with the MB solution. A reduced probe size would make this approach possible. The third improvement involves the use of recently reported novel AAVs that have a higher transfection rate. [11][12][13][14]16,69,70 We believe that with these improvements, USMB-mediated cochlear gene transfection via the RWM would become a useful tool that could be translated into human clinical applications.
Seasonal Variations of the Urban Thermal Environment Effect in a Tropical Coastal City
1State Key Laboratory of Desert and Oasis Ecology, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China 2Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China 3University of Chinese Academy of Sciences, Beijing 100049, China 4Faculty of Environmental Sciences, University of Lay Adventists of Kigali (UNILAK), P.O. Box 6392, Kigali, Rwanda 5Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100094, China
Introduction
Urban heat island (UHI) effect refers to the phenomenon by which urban areas' temperatures become higher than those of the surrounding rural areas, usually as a result of rapid urbanization [1][2][3]. Based on the UHI concept, the urban thermal environment is a concept put forward in recent years by experts and scholars in the meteorology and environmental research fields [4,5], referring to the heat-related physical environment that can affect the urban atmosphere, energy consumption, the body's sense of well-being, health status, and human survival and development [6]. At present, the deterioration of the urban thermal environment has become one of the most significant features of global urban climate change and has great negative impacts on urban air quality improvement, CO2 and haze control, as well as plant health and growth [7,8]. The evolution of the urban thermal environment has a close relationship with human society and economic activities. The acceleration of the urbanization process and the expansion of asphalt, metal, cement, and other impervious surfaces at the expense of natural surfaces have, in conjunction with population growth, resulted in urban climate change that is even more dramatic than global change [9,10].
In general, UHI can be used to reflect and embody the urban thermal environment [11]. Oke (1995) and Yow (2007) demonstrated that UHI can be divided into three types: Canopy Layer Heat Island (CLHI), Boundary Layer Heat Island (BLHI), and Surface Urban Heat Island (SUHI) [12,13], whereby the first two types refer to the atmospheric urban heat island. The Urban Canopy Layer (UCL) roughly refers to the layer from the ground surface to the height of the urban elements (mainly trees or buildings), while the Urban Boundary Layer (UBL) refers to the layer of the atmosphere situated directly above the UCL [14,15].
Advances in Meteorology
Heat island studies of these two distinct layers primarily utilize air temperature data or meteorological datasets, with ground observation and numerical simulation as the main methodological approaches involved [16,17]. SUHI differs from the atmospheric urban heat island and is usually measured with land surface temperature (LST) data retrieved from thermal infrared airborne and satellite sensors. The SUHI is substantially affected by urbanization and has been increasingly used in recent studies because it is closely related to human health and directly linked to land surface features [18,19].
Remote sensing technology has the advantages of a short acquisition period, wide coverage, low user cost, and rapid and accurate monitoring of urban surface temperatures [20,21]. It has presently become the main technical means and a powerful research tool for experts and scholars to analyze the trends and dynamics of UHI, owing to the fact that it can give a spatially continuous view of the SUHI and effectively depict the patterns of the thermal environment over large urban areas, compared to air temperature measured at standard meteorological stations or ground-based air temperature measurements in cities [22][23][24].
Early efforts started in 1972 when, for the first time, Rao used thermal infrared remote sensing technology to study the surface temperature distribution model and SUHI in coastal cities along the Atlantic Ocean in the United States [25]. Since then, polar orbiting satellites such as AVHRR, MODIS, and Landsat TM/ETM, and geostationary orbiting satellites (e.g., GOES, MSG), have been the most commonly used remote sensing data sources for SUHI research, with the advantages of measuring urban thermal environments and offering spatially explicit coverage at multiple scales, ranging from cities to the whole world [26,27]. Nevertheless, the design of these satellite sensors has been regarded as a bottleneck limiting their application to SUHI study, because the imagery combines a high temporal resolution with a relatively low spatial resolution [28]. For instance, SEVIRI onboard MSG and the GOES satellites can collect data every 5 and 30 minutes, respectively, which is fairly adequate to monitor the diurnal development of LST [29], but they are characterized by very low spatial resolutions of about 3000 m. In order to describe the UHI characteristics, downscaling methods have often been employed to enhance the spatial resolution, but the choice of an appropriate method still remains a great challenge [30]. Likewise, AVHRR and MODIS images have relatively high temporal resolutions, but their low spatial resolution constrains their use. For instance, AVHRR provides two overpasses at the equator every day, but its spatial resolution (1.1 km) is too coarse to study variations in SUHI with precision [31]. Similarly, MODIS acquires data at a high temporal resolution, but it is still problematic to accurately describe the spatial changes in SUHI due to its relatively low spatial resolution (1 km) [32,33]. Numerous SUHI studies based on Landsat TM/ETM+ data have been conducted, but while the spatial resolution of the Landsat TM/ETM+ thermal data is relatively high (120 m/60 m), its temporal resolution is low (16 days), which makes it extremely difficult to obtain the desired images for analyzing and monitoring SUHI dynamics in real time [34][35][36]. Therefore, it is imperative to seek an alternative remote sensing dataset with a rather satisfactory spatiotemporal resolution. HJ-1B is a satellite successfully launched by China on September 6, 2008, whose main function lies in environmental disaster monitoring and forecasting. The spatial resolution of its thermal infrared band IRS4 is 300 m, which can meet the requirements of SUHI monitoring on the spatial scale. The revisit time of the satellite is 4 days, which is sufficient for global coverage and offers the possibility of data acquisition at a much higher frequency than that of the Landsat instruments. Over recent years, Ouyang et al. [37] used HJ-1B remote sensing imagery to retrieve LST over the Heihe river basin in China, while Wu et al. [28] used HJ-1B thermal infrared bands to assess the effects of land use spatial structure on UHI in Wuhan, China. Notwithstanding these studies, however, none focused on the seasonal variations of SUHI, especially in areas with remarkably rapid urbanization processes such as Shenzhen, one of the most rapidly urbanizing cities in China.
The objective of this study is twofold. Firstly, it examines the spatial characteristics of SUHI change in Shenzhen, as quantified by LST and urban heat island intensity (UHII), two important indicators for evaluating the severity of SUHI in different seasons. Secondly, it analyzes and discusses the seasonal variations of SUHI in Shenzhen during 2015. Remote sensing images were used to retrieve the seasonal LST, which was further classified into seven levels to show the SUHI intensity. Spatial analyses, including Moran's I and the gravity center model, were conducted to derive the spatial patterns and dynamics of seasonal SUHI change on local scales. Based on the results and discussion, this paper may provide useful information for urban planners to create strategies to mitigate SUHI effects in Shenzhen.
Study Area.
The city of Shenzhen is located between 113°46′E–114°37′E and 22°27′N–22°52′N in the southern part of Guangdong Province of China, where it shares borders with the Hong Kong Special Administrative Region (SAR) to the south; Dongguan, Guangzhou, and Huizhou cities to the north; the Pearl River Estuary to the west; and Daya Bay and Dapeng Bay to the east (Figure 1).
Shenzhen has a tropical maritime climate consisting of long summers and short winters. The annual average temperature is 23.0°C, the average low temperature is 15.4°C in January, and the average maximum temperature is 28.9°C in July [38]. The mean annual total precipitation is 1933.3 mm, with the rainy season extending from April to September.
Shenzhen city, as a prototype of rapid urbanization since China's reform and opening up policy in 1978, is one of the largest and fastest-developing cities in China. Shenzhen has emerged from a border town into an international megacity comprising ten administrative districts. In 2015, the total area of the city was 1996 sq. km, the residing population was estimated at 10.77 million, and the population density and population growth were 604 people/km² and 1.4%, respectively. The gross domestic product (GDP) was 1750.299 billion yuan. Therefore, Shenzhen can be considered an ideal area for studying the spatiotemporal variability of the UHI effect within the context of rapid urbanization.

2.2. Data Source and Preprocessing.

HJ-1B is a small optical satellite that was launched on September 6, 2008. It is mainly used for monitoring the environment and disasters, owing to the Chinese government's will to enhance disaster reduction and risk control capabilities and improve environmental protection [39]. The HJ-1B satellite has one infrared sensor (IRS) and two charge-coupled device (CCD) sensors. The IRS can provide global coverage every 4 days, while the CCD sensors can cover the globe in 2 days [40]. The CCD sensors have four bands: band 1 is blue, and bands 2 to 4 are the green, red, and near-infrared bands, respectively. The IRS has four bands: band 5 (near infrared, NIR), band 6 (shortwave infrared, SWIR), band 7 (middle-wave infrared, MWIR), and band 8 (longwave infrared, LWIR). The main parameters of the HJ-1B satellite include the 30 m spatial resolution CCD data and 300 m spatial resolution thermal infrared data [41].
The detailed sensor parameters are shown in Table 1.
In 2015, using the 5-Day Running Mean Temperature method based on the meteorological data (1980–2010) of Shenzhen city, the Meteorological Bureau of Shenzhen Municipality announced the succession of seasons in the order of winter (January 13th to February 6th), spring (February 6th to April 21st), summer (April 21st to November 3rd), and fall (November 3rd to January 13th) [42]. Since then, many studies have shown that this method was appropriate in relation to the meteorological characteristics of Shenzhen [43]. As often occurs in tropical areas, the climatic conditions of tropical coastal cities like Shenzhen are less than ideal for seasonal SUHI study because the images are often occluded by clouds [44,45]. Therefore, despite nearly 10 years of HJ-1B imagery, only four high-quality HJ-1B CCD and IRS images were deemed usable and were selected for use in this study, one per season. These images were acquired in the winter, spring, summer, and autumn of 2015, precisely on January 16, April 14, October 18, and December 18, respectively. They were acquired at approximately 11:05 am local time under steady atmospheric conditions favorable for SUHI studies, such as windless weather and sunny conditions. In order to further ensure their suitability for seasonal analysis and to verify that the remote sensing image acquisition dates were not affected by extreme temperatures [46], we compared the daily maximum and minimum air temperatures on the acquisition dates with the average maximum and minimum air temperature data provided by the Meteorological Bureau of Shenzhen Municipality, as illustrated in Table 2. It can be observed that, by comparison, the daily maximum and minimum air temperature data are within the range of the seasonal average maximum and minimum temperatures, which confers some degree of reliability to the remote sensing data used in this study.
The four CCD images and four IRS images were obtained from the China Center for Resources Satellite Data and Application.

(I) Radiometric Calibration. The DN values were first converted to at-sensor radiance according to

L_CCD = DN/g + L_0, (1)

L_IRS4 = DN/g + L_0, (2)

where L_CCD and L_IRS4 are the radiances (W⋅m⁻²⋅sr⁻¹⋅μm⁻¹), g is the gain coefficient, L_0 is the offset, and DN is the image pixel digital value. To convert the DN values to radiance, we used the HJ-1B CCD and IRS cameras' absolute radiometric calibration coefficients, which were obtained from the China Center for Resources Satellite Data and Application (Table 3).
The DN values of the CCD camera's bands 1–4 were converted to radiance values using (1), whereas the DN values of the IRS's band 4 were converted to radiance values using (2).
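As a sketch, the gain/offset calibration can be applied per band with NumPy. It assumes the linear form L = DN/g + L_0 described above; the gain and offset values used below are hypothetical placeholders, not the official Table 3 coefficients.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert raw DN values to at-sensor radiance via L = DN / g + L_0.

    gain (g) and offset (L_0) come from the sensor's absolute radiometric
    calibration coefficients; the values passed below are illustrative
    placeholders only, not the published HJ-1B coefficients.
    """
    dn = np.asarray(dn, dtype=np.float64)
    return dn / gain + offset

# A tiny 2 x 2 DN patch with hypothetical calibration coefficients.
dn_band = np.array([[120, 200],
                    [80, 255]])
radiance = dn_to_radiance(dn_band, gain=1.25, offset=3.0)
```

The same function serves both sensors; only the per-band gain and offset change.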
(II) Atmospheric Correction. In order to eliminate the influence of atmospheric and illumination factors on ground reflectance, it is necessary to correct the remotely sensed image after calibration. FLAASH is an atmospheric correction module developed by Spectral Sciences, Inc. It directly transfers the radiative transfer calculation method from the MODTRAN5 atmospheric radiative transfer model and accurately retrieves the surface reflectance from the remote sensing image. FLAASH is widely used in surface temperature research, and its atmospheric correction wavelengths range from 0.4 μm to 3 μm. Hence, the ENVI FLAASH atmospheric correction module was used to correct the HJ-1B CCD images after radiometric calibration.
(III) Geometric Correction. Landsat-8 images acquired on October 5 and 15, 2014, were chosen as the reference images, and the HJ-1B images were geometrically corrected to them with a correction error of 0.5 pixels or less.
Retrieval of LST.
At present, various LST retrieval methods, such as the mono-window algorithm [47], split-window algorithm [48], temperature/emissivity separation [49], and single-channel methods [50], have been theoretically developed. Although all of these methods can calculate LST from thermal remote sensing images and provide good results, it has been proven that, by comparison, the mono-window algorithm constitutes a simple and highly effective method for the analysis of the SUHI effect [51]. Sobrino et al. (2004) showed that the mono-window algorithm seems to be more applicable than the single-channel method, with a root mean square deviation of 0.9 K [52]. Besides, the mono-window algorithm is believed to have a comparative advantage over single-channel algorithms in the sense that it can yield better results in regions with a humid atmosphere [53]. Previous researchers have applied the mono-window algorithm for the analysis of SUHI in humid areas like Hong Kong, Casablanca, and the Pearl River Delta Region in South China [34,54,55]. Thus, the mono-window algorithm was utilized to retrieve the LST of Shenzhen from HJ-1B data in this study. The LST is retrieved from HJ-1B IRS4 data as follows:

T_s = {a(1 − C − D) + [b(1 − C − D) + C + D]T_0 − D·T_a}/C,  C = ε·τ,  D = (1 − τ)[1 + (1 − ε)τ], (3)

where T_s is the land surface temperature in Kelvin, T_0 is the satellite brightness temperature of IRS4 in Kelvin, and T_a is the mean atmospheric temperature in Kelvin. ε is the ground emissivity, while τ is the atmospheric transmittance. a and b are coefficients that can be approximated by linearizing the Planck radiance as a function of temperature:

B(T) ≈ a + b·T, (4)

where T is a temperature variable. For IRS4, B(T) has a strong linear relationship with T_0. Since the possible temperature range is 0°C to 55°C in most cases, the coefficients in (3) and (4) were approximated as a = −69.158 and b = 0.4684, with a correlation coefficient R² = 0.9997, according to the IRS4 channel response function published by the China Center for Resources Satellite Data and Application. T_0 is calculated according to

T_0 = K_2 / ln(1 + K_1/L), (5)

where L is the radiance calculated by (2), and K_1 and K_2 are prelaunch calibration constants. For the IRS4 band used in this study, K_1 = 589.33 W⋅m⁻²⋅sr⁻¹⋅μm⁻¹ and K_2 = 1249.91 K.
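The brightness-temperature step (the inverse Planck relation) can be sketched with the K_1 and K_2 constants quoted for IRS4; the input radiance below is an arbitrary plausible value, not a measured one.

```python
import numpy as np

# Prelaunch calibration constants for HJ-1B IRS4 quoted in the text.
K1 = 589.33    # W m^-2 sr^-1 um^-1
K2 = 1249.91   # K

def brightness_temperature(radiance):
    """At-sensor brightness temperature (K) from IRS4 radiance,
    via T0 = K2 / ln(1 + K1 / L)."""
    L = np.asarray(radiance, dtype=np.float64)
    return K2 / np.log(1.0 + K1 / L)

# Illustrative radiance value only.
t0 = brightness_temperature(9.5)
```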
There are many ways to calculate the ground emissivity. In this paper, the Normalized Difference Vegetation Index (NDVI) threshold and fractional vegetation cover (FVC) methods are combined to estimate ε for each pixel. NDVI and FVC are calculated according to

NDVI = (band4 − band3)/(band4 + band3),  P_v = (NDVI − NDVI_s)/(NDVI_v − NDVI_s), (6)

where band 3 and band 4 are the reflectance values in the red and NIR regions of the HJ-1B CCD images, and P_v is the fractional vegetation cover. NDVI_v is the minimum NDVI value of a pure vegetation pixel, and NDVI_s is the maximum NDVI value of a pure soil pixel.
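The NDVI and FVC computations can be sketched per pixel as follows; the reflectance values and the pure-pixel thresholds below are illustrative assumptions, not the scene-derived values.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - red) / (NIR + red), from the CCD red and NIR bands."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    return (nir - red) / (nir + red)

def fractional_vegetation_cover(ndvi_img, ndvi_soil, ndvi_veg):
    """FVC, clipped to [0, 1]. ndvi_soil and ndvi_veg are the scene-specific
    pure-soil and pure-vegetation NDVI thresholds (assumed values here)."""
    fvc = (np.asarray(ndvi_img, dtype=np.float64) - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

# Two illustrative pixels: one vegetated, one bare.
v = ndvi(red=np.array([0.08, 0.20]), nir=np.array([0.40, 0.22]))
pv = fractional_vegetation_cover(v, ndvi_soil=0.05, ndvi_veg=0.70)
```

Clipping enforces the pure-pixel rules from the text (P_v = 1 above the vegetation threshold, P_v = 0 below the soil threshold).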
In order to preadjust the calculation, the emissivities of typical substances such as vegetation, soil, water, and impervious surfaces are generally included in the calculation formula. Vegetation emissivity values are located between 0.98 and 0.99, and water emissivity is usually taken as 0.995 based on previous research findings [56]. Impervious surface emissivity is selected between 0.960 and 0.980, and soil emissivity is usually selected between 0.970 and 0.980 [57]. For a pure vegetation pixel of the HJ-1B image, we selected the vegetation emissivity ε_v = 0.986 and set P_v = 1 when NDVI ≥ NDVI_v. For a pure soil pixel, we selected the soil emissivity ε_s = 0.972 and set P_v = 0 when NDVI ≤ NDVI_s. For a pure water pixel, we selected the water emissivity ε_w = 0.995, while for the impervious surface we selected the emissivity ε_m = 0.968. The vegetation, soil, water, and impervious surface emissivity values were based upon local conditions and each substance's spectral curve. For a mixed pixel of soil and vegetation, ε can be calculated according to (7); for a mixed pixel of impervious surface and vegetation, ε can be calculated according to (8):

ε = P_v·R_v·ε_v + (1 − P_v)·R_s·ε_s + dε, (7)

ε = P_v·R_v·ε_v + (1 − P_v)·R_m·ε_m + dε, (8)

where dε accounts for the error in the emissivity value. It is the mean weighted value that can be calculated from the mean emissivity values of the different surface types; an approximate weighted value of 0.04 was adopted in this paper. R_v, R_s, and R_m are the temperature ratios of vegetation, bare soil, and impervious surface, respectively. Qin et al. (2004) presented an elaborate determination of these ratios for accurate LST retrieval from Landsat TM6 data and gave accurate estimations of the ratios as linear functions of the fractional vegetation cover of the pixel [58]. Duan et al.
established an equation relating the atmospheric transmittance τ to the atmospheric water vapor content for IRS4 [59]. The water vapor content w is estimated as

w = [(α − ln(ρ_19/ρ_2))/β]²,

where α and β are constants taken as 0.02 and 0.651, and ρ_2 and ρ_19 are the reflectance values in bands 2 and 19 of MODIS; bands 2 and 19 are sensitive to atmospheric water vapor, and the satellite overpass times of MODIS and HJ-1B are very close. The last parameter in (3) is the mean atmospheric temperature T_a. Duan proposed four models of different atmospheric profiles for calculating T_a for IRS4 [51]. Since Shenzhen city lies in the tropics, the tropical-profile model, which estimates T_a as a linear function of the near-surface air temperature, was adopted. The tropical model has been successfully applied for LST retrievals by previous researchers [53]. All analysis was completed using the software ENVI 5.3. After the completion of the above steps, the LST images were obtained as shown in Figure 2.
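Putting the pieces together, the mono-window computation can be sketched as below, using the a and b coefficients quoted for IRS4; the emissivity, transmittance, and mean atmospheric temperature inputs are illustrative placeholders, not retrieved values.

```python
import numpy as np

def mono_window_lst(t0, emissivity, transmittance, t_a,
                    a=-69.158, b=0.4684):
    """Mono-window LST (K) for HJ-1B IRS4.

    t0  : satellite brightness temperature (K)
    t_a : effective mean atmospheric temperature (K)
    a, b: Planck-linearisation coefficients quoted in the text for IRS4.
    """
    eps, tau = emissivity, transmittance
    C = eps * tau
    D = (1.0 - tau) * (1.0 + (1.0 - eps) * tau)
    return (a * (1 - C - D) + (b * (1 - C - D) + C + D) * t0 - D * t_a) / C

# Plausible (not measured) inputs for a single warm, humid pixel.
lst = mono_window_lst(t0=300.0, emissivity=0.97, transmittance=0.80,
                      t_a=295.0)
```

As expected, the retrieved LST comes out slightly above the at-sensor brightness temperature once emissivity and atmospheric attenuation are compensated.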
Comparison with MODIS LST Product.

Multiple daily LST products have been used in urban thermal environment studies worldwide. The data are generated by the science team of the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the NASA Terra and Aqua Earth Observation System satellites. In this paper, MODIS daily temperature products (MOD11A1) provided by NASA were employed to validate the LST results retrieved from HJ-1B. MOD11A1 data consist of level 3 daily LST products acquired at 1 km resolution, which have been extensively used in SUHI monitoring studies [60,61]. The split-window algorithm is used to retrieve LST by applying bands 31 (10.78–11.28 μm) and 32 (11.77–12.27 μm) of MODIS. The accuracy of MOD11A1 has been verified, and the margin of error is 1.0 K [62]. Previous studies have proven that MOD11A1 is a useful tool for experts and scholars to validate the accuracy of LST retrieved from HJ-1B [63,64].
The MOD11A1 data, whose overpass time on the same day is close to that of the HJ-1B satellite, were geometrically corrected using NASA's MRT tool. The MODIS LST (°C) for Shenzhen can be calculated as follows [65]:

T = 0.02 × DN − 273.15,

where T is the LST value and DN is the pixel gray value. The radiometric scale factor is 0.02, as indicated in the MOD11A1 products' header files, the offset is 0, and 273.15 is the difference between the Kelvin temperature (K) and degrees Celsius (°C).
In order to match the images, the MOD11A1 data were resampled to pixels of 300 m × 300 m for comparison with the LST retrieved from HJ-1B for the same study period. Since the temperature distribution over a lake surface is homogeneous, Xiangmi Lake, Shenzhen Reservoir, Yantian Reservoir, Meilin Reservoir, and Xili Reservoir were selected in the study area as the ground targets for verification. Then, 100 sampling points over the lake surfaces were randomly selected for accuracy assessment, and the results are given in Table 4. By comparison, it can be concluded that the inversion accuracy of LST based on HJ-1B thermal infrared data is high; the correlation coefficients were 0.873, 0.859, 0.877, and 0.815 in winter, spring, summer, and autumn, respectively, and all passed the significance test at the 0.01 level. Moreover, the error indicators RMSE and MAE are less than 3 and 2, respectively. All of this indicates that LST retrievals from HJ-1B can be trusted in this area.
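The accuracy statistics reported in Table 4 (correlation coefficient, RMSE, MAE) can be computed as sketched below; the sample data here are synthetic, for illustration only, not the paper's lake measurements.

```python
import numpy as np

def validation_stats(hj_lst, modis_lst):
    """Correlation coefficient, RMSE, and MAE between HJ-1B retrievals
    and resampled MOD11A1 values over the sampling points."""
    x = np.asarray(hj_lst, dtype=np.float64)
    y = np.asarray(modis_lst, dtype=np.float64)
    r = float(np.corrcoef(x, y)[0, 1])
    rmse = float(np.sqrt(np.mean((x - y) ** 2)))
    mae = float(np.mean(np.abs(x - y)))
    return r, rmse, mae

# Synthetic stand-in for 100 lake sampling points.
rng = np.random.default_rng(0)
truth = rng.uniform(15.0, 30.0, 100)          # "MODIS" reference LST (deg C)
retrieved = truth + rng.normal(0.0, 1.0, 100)  # "HJ-1B" LST with 1 deg C noise
r, rmse, mae = validation_stats(retrieved, truth)
```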
Determination of Urban Heat Island Intensity.
Urban heat island intensity (UHII) is an important indicator to evaluate the severity of SUHI [65]. In this study, UHII is defined as the difference between the LST of each pixel and the mean surface temperature in the HJ-1B images [66]. This approach differs from other regional or global UHII analyses that took a "rural" area (a certain distance away from the urban area) as the reference location [67,68], since Shenzhen is the first city in China with no rural administrative system. There is no rural social system, and the urbanization rate amounts to 100%. Since 2003, the government has passed a series of regulations to urbanize all rural areas of Shenzhen [69] and took over all land owned by local rural residents after attributing urban resident status to all rural residents of that time. As most rural areas were surrounded by the city's built-up centers, these changes not only turned all traditional villages into "urban villages," but also marked the end of the urban-rural system division in Shenzhen. It can therefore be stated that there is not a reasonably large "rural area" to be considered as a reference, and it is practically impossible to demarcate the boundaries between the city and the rural area.
The UHII calculation method adopted here can reduce the uncertainties associated with site-specific rural conditions across Shenzhen city (e.g., topography, the presence of water bodies, and land use). It can be calculated as follows:

UHII_xy = T_xy − T_m,

where UHII_xy and T_xy are the SUHI intensity and LST of pixel (x, y) in the HJ-1B image, respectively, and T_m is the mean surface temperature in the image (Figure 3).
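This per-pixel definition (pixel LST minus the scene mean) is a one-liner in NumPy; the 2 × 2 input grid below is purely illustrative.

```python
import numpy as np

def uhii(lst_image):
    """Per-pixel UHII: deviation of each pixel's LST from the scene mean."""
    lst = np.asarray(lst_image, dtype=np.float64)
    return lst - lst.mean()

# Illustrative 2 x 2 LST field (deg C); scene mean is 29.0.
field = np.array([[28.0, 31.0],
                  [25.0, 32.0]])
intensity = uhii(field)
```

By construction, the UHII values sum to zero over the scene, so positive values mark pixels warmer than the citywide average.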
Classification of LST Level.
The density segmentation method was used to classify the urban surface thermal environment after the LSTs were normalized, which can reduce atmospheric and calibration correction errors [31,70].
Firstly, the LST values were normalized between 0 and 1 for the different seasons of 2015. The normalized value was calculated using

N_i = (T_i − T_min)/(T_max − T_min),

where N_i is the normalized LST value of pixel i, T_i is the LST of pixel i, and T_max and T_min are the maximum and minimum LST over the entire study area.
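A minimal sketch of the normalization and density segmentation follows. Equal-width class breaks are assumed here because the paper's actual Table 5 boundaries are not reproduced in this excerpt, and the input LST values are illustrative.

```python
import numpy as np

def normalize_lst(lst):
    """Min-max normalisation of LST to [0, 1] over the whole scene."""
    lst = np.asarray(lst, dtype=np.float64)
    return (lst - lst.min()) / (lst.max() - lst.min())

def classify_levels(norm_lst, n_levels=7):
    """Density segmentation into n_levels classes (1 = very low ...
    7 = very high). Equal-width breaks are an assumption; the paper's
    actual class boundaries are listed in its Table 5."""
    edges = np.linspace(0.0, 1.0, n_levels + 1)[1:-1]  # interior breakpoints
    return np.digitize(norm_lst, edges) + 1

norm = normalize_lst(np.array([13.0, 20.0, 27.0, 31.89]))
levels = classify_levels(norm)
```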
To reflect the spatiotemporal distribution of LST directly, the normalized LST values were further divided into seven levels: very high, high, sub-high, medium, sub-medium, low, and very low. The classification criteria of the LST levels are listed in Table 5.

Spatial Autocorrelation Analysis.

Spatial autocorrelation is an indicator which measures the aggregation degree of a spatial attribute value [71]. It mainly reflects the relevance of the same variable in different spatial positions. Global autocorrelation analysis has been widely used to detect the overall spatial clustering pattern of a variable. In this study, the global Moran's I was used:

I = n·Σ_i Σ_j w_ij (x_i − x̄)(x_j − x̄) / [Σ_i Σ_j w_ij · Σ_i (x_i − x̄)²],

where I is Moran's index value, x_i is the attribute (UHII) of pixel i in the UHII image, x̄ is the mean UHII, n is the total number of observations, and w_ij is the spatial weight matrix representing the spatial relationship between locations i and j in the UHII attribute space.
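The global Moran's I statistic can be sketched for a small illustrative case as follows; the weight matrix below encodes simple adjacency along a one-dimensional transect of four pixels, not the study's actual 300 m grid.

```python
import numpy as np

def global_morans_i(values, weights):
    """Global Moran's I for a flat vector of UHII values and a
    symmetric spatial weight matrix with a zero diagonal."""
    x = np.asarray(values, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    z = x - x.mean()
    n = x.size
    num = n * (z @ w @ z)          # n * sum_ij w_ij z_i z_j
    den = w.sum() * (z @ z)        # S0 * sum_i z_i^2
    return num / den

# Four pixels along a transect, clustered warm-warm / cool-cool,
# with chain adjacency 0-1, 1-2, 2-3 -> positive autocorrelation.
vals = np.array([10.0, 10.0, 1.0, 1.0])
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
I = global_morans_i(vals, w)
```

Like values neighboring like values yields I > 0, matching the positive seasonal autocorrelation the paper reports.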
Gravity Center Model.
Based on the trend analysis of UHII change, a centroid movement analysis of UHII was conducted to reflect the dynamics of the urban thermal environment patterns in Shenzhen across different seasons. That is to say, the center of gravity is an important indicator which can describe the spatial distribution and transition of UHII. It not only shows the tendency of the spatial distribution, but also reflects the "high-density" parts and the overall heterogeneity [73]. In this paper, the gravity center model was used to reflect the overall transfer trajectory of the distribution of UHII during different seasons in Shenzhen. It has great potential to reveal the evolution of seasonal SUHI in the study area. For the UHII geographical objects, the coordinates of the gravity center were computed using

X = Σ_{i=1}^{n} (A_i·x_i) / Σ_{i=1}^{n} A_i,  Y = Σ_{i=1}^{n} (A_i·y_i) / Σ_{i=1}^{n} A_i,

where X and Y are the centroid coordinates of UHII calculated by the area-weighted average, x_i and y_i are the centroid coordinates of pixel i with UHII, A_i is the area of pixel i, and n is the number of all pixels with the same UHII value.
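The area-weighted centroid can be sketched as follows; the three pixels and their 300 m × 300 m areas are illustrative only.

```python
import numpy as np

def gravity_center(x, y, area):
    """Area-weighted centroid (X, Y) of the pixels carrying a given
    UHII value, per the gravity center model."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    a = np.asarray(area, dtype=np.float64)
    return float((a * x).sum() / a.sum()), float((a * y).sum() / a.sum())

# Three illustrative pixels of equal 300 m x 300 m area (9e4 m^2 each).
X, Y = gravity_center(x=[0.0, 3.0, 6.0],
                      y=[0.0, 0.0, 3.0],
                      area=[9e4, 9e4, 9e4])
```

Computing this centroid per season and per UHII level traces the seasonal transfer trajectory the paper describes.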
Spatiotemporal Distribution Dynamics of LST.

LSTs in Shenzhen during the different seasons of 2015 were quantitatively retrieved from HJ-1B data, and their spatiotemporal distribution was further analyzed (Figure 2). In order to reflect the spatial variations of LST during the four seasons, we used a detailed land use reference map derived from the high resolution (17 m) Google Earth image dated April 14, 2015. The supervised maximum likelihood classification method was used to obtain the land use reference map composed of seven land use/land-cover types, namely impervious surface area (ISA), forest land, public gardens, plough land, grass land, unused land, and water bodies, as shown in Figure 4. Spatially, LSTs showed a decreasing trend from west to east, and the high temperature regions were concentrated in the Western Industrial Cluster, the Qianhai and Futian-Luohu Urban Municipal Center, the Central Urban Cluster, and the Eastern Industrial Cluster. Several reasons may explain the occurrence of higher temperatures in the abovementioned regions. Firstly, the Western Industrial Cluster is close to the cities of Guangzhou and Dongguan, hosts a large number of ports, airports, factories, universities, and residential areas, and has become the most important base for high-tech and manufacturing industries in the Pearl River Delta region. Secondly, the Qianhai and Futian-Luohu Urban Municipal Center is a gigantic business center, established on the development corridors of Guangzhou and Hong Kong, and serves as an international production service center; it contains many urban villages and old residential and industrial areas. Finally, the Central Urban Cluster has become the major site for real estate development due to its locational advantages. The Eastern Industrial Cluster has also had rapid industrial development, underpinned by government efforts to promote industrial expansion in the area. Besides, the SUHI effect is not obvious in the Dapeng Peninsula (southeastern Shenzhen), whose terrain is largely mountainous along the coast. This
region is also adorned with a significant number of water bodies and green areas that substantially attenuate thermal radiation. Areas with low LSTs also include Xili Street in the southwest of the city and the Longgang Center Cluster in the central part of the city, owing to the presence of numerous water reservoirs, lakes, and a hilly topography. The highest surface temperatures were found at the important transportation hubs, such as Baoan International Airport in the west, Shekou Port in the southwest, and Yantian Port in the south.

The seasonal trends of LST are shown in Figure 2. In winter, LST values ranged from 13 to 31.89°C, with higher surface temperatures mainly concentrated in the business and industrial districts of Nanshan, Baoan, Longgang, and Guangming. The SUHI effect was strikingly more pronounced in spring and summer, where the difference between the peak value and the lowest value reached as much as 27°C. The highest temperatures were in the Qianhai and Futian-Luohu Urban Municipal Center, where urbanization is at its peak together with political, business, high-end service, and residential functions. Generally, the LST values were higher during summer and spring compared with winter and autumn. In all seasons, the LST was highest in transportation hubs, followed by commercial areas, industrial areas, and residential areas, while the lowest values occurred in green areas and water body areas, with LSTs of 11.78°C in autumn and 19.25°C in summer, respectively. The reasons for the seasonal variations in LST values within the study area are very clear: all higher LST values were located in commercial activity centers and principal residential zones, while all areas with lower LST values consisted of vegetation and water covers [74].
Seasonal Variation of UHII and Its Profile Characteristics.
Figure 3 displays the spatial distribution of UHII in Shenzhen from spring through winter 2015. It can be observed that UHII values are higher in spring and summer and lower in autumn and winter. The red areas with higher UHII are vastly concentrated in spring and summer, while a lower UHII and a more even distribution are noted in autumn and winter. The UHI has distinctive regional characteristics in conformity with the trajectory of urban expansion. Along the No. 1 to No. 5 path profiles, there exist numerous "peaks," "basins," and "plateaus," indicating the heterogeneous nature of UHII over the area and also reflecting the influence of different types of land use and occupation on the UHII along the profile paths. Many factors, such as the occurrence of mountains, other green spaces, water bodies, high population densities, buildings, and the administrative subdivision of the city's functional districts, may exert an influence on the spatial distribution of UHII. The seasonal variation of UHII becomes visible when comparing the different sample paths of the UHII profiles in Figure 5. It appears that the UHII value varies with the seasons: the highest values belong to summer, followed by spring and autumn, while the lowest values are found in winter. The effect of UHII becomes more prominent as the season changes from autumn to spring. The No. 1 path profiles have higher UHII values than the other path profiles. This indicates that the No. 1 path profiles cross more built-up or impervious areas, which possess higher thermal signatures than the other path profiles. Indeed, the No. 1 path profiles pass over the area with the highest impervious surface coverage, including airports, ports, and industries. The relatively low UHII values were found along the No.
5 path profiles, most likely because this path crosses more land-cover types, encompassing water bodies and green spaces. It can be noted in Figure 5 that, along almost all paths in winter, the UHII values were lower than in the other seasons, while in summer most UHII values were at their highest. The UHII values in summer and spring are relatively close, as are those in winter and autumn; there is therefore a contrast, especially between the winter and summer seasons. Over some water bodies in particular, the UHII value goes from zero to negative. This profile crosses an important ecological belt established by the Shenzhen municipal authorities to protect the environment, containing many mountains, water bodies, parks, beaches, and green tourist attractions, where impervious surfaces and built-up areas occupy an insignificant portion of the land.
Spatiotemporal Distribution of the LST Levels.
Based on the LST level classification (Figure 6), the spatiotemporal diversity of LST in Shenzhen can be disentangled through anomaly analysis into "high temperature zones" (very high, high, and sub-high), "medium temperature zones," and "low temperature zones" (sub-medium, low, and very low). High temperature zones occupy the largest proportion in spring and summer, and medium temperature zones occupy the largest proportion in autumn and winter.
The seasonal analysis shows that, in autumn and winter, "medium temperature zones" occupy a large proportion, followed by "low temperature zones," whereas "high temperature zones" account for the smallest proportion. By contrast, in spring and summer, "high temperature zones" occupy a large proportion, followed by "low temperature zones" and "medium temperature zones." The "high temperature zones" in spring and summer were centered on the Western Industrial Cluster, indicating a high UHII. The southeast of Shenzhen is covered by mountains with very dense vegetation during spring and summer, which may explain its relatively low UHII. During winter and autumn, "high temperature zones" are few and extremely scattered. However, "very high temperature zones" were detected in all seasons, concentrated over the Baoan International Airport, Shekou Port, and Yantian Port, underscoring the strong impact of anthropogenic activities on the urban thermal environment.
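The anomaly-based level classification can be sketched as standardizing each season's LST and binning the z-scores into the seven classes. The cut-offs below are hypothetical placeholders; the paper's actual criteria are given in its Table 5:

```python
import numpy as np

LEVELS = np.array(["very low", "low", "sub-medium", "medium",
                   "sub-high", "high", "very high"])
CUTS = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]  # hypothetical z-score thresholds

def classify_lst(lst):
    """Standardize an LST array (anomaly in units of its own standard
    deviation) and map each pixel to one of the seven temperature levels."""
    z = (lst - lst.mean()) / lst.std()
    return LEVELS[np.digitize(z, CUTS)]
```

Classifying each season's standardized LST separately, as done here, is what makes the seasonal proportions of high/medium/low zones directly comparable.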
3.4. Spatial Autocorrelation Analysis. Moran's index was used to perform spatial autocorrelation analysis (Figure 7). Moran's index ranges between −1 and 1: values above 0 indicate positive spatial autocorrelation, values below 0 indicate negative autocorrelation, and a value of 0 means no spatial correlation. In this study, Moran's index values in all seasons were above 0.5, indicating positive spatial autocorrelation of UHII in every season. Higher values in summer and spring, followed by winter and autumn, suggest that the aggregation of high and low temperature zones was more apparent in summer and spring.
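Global Moran's I, as used above, can be written down in a few lines. A minimal sketch for zone values and a spatial-weights matrix (the study's actual weighting scheme is not specified in this section, so the binary chain-contiguity example in the usage below is an assumption):

```python
import numpy as np

def morans_i(values, w):
    """Global Moran's I: (n / S0) * (z' W z) / (z' z), where z are the
    mean-centred values, W the spatial-weights matrix, S0 its sum."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()
    return x.size * (z @ w @ z) / (w.sum() * (z @ z))
```

For a monotone gradient along a chain of zones the index is positive; for a perfectly alternating (checkerboard) pattern it reaches −1, matching the interpretation given above.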
Gravity Center of UHII Shifts during Different Seasons.
The gravity center of UHII shifts was concentrated in the Longhua, Baoan, and Nanshan Districts throughout the four seasons (Figure 8). This is because Longhua District belongs to the Central Urban Cluster, Baoan District belongs to the Western Industrial Cluster, and Nanshan District shelters the Qianhai Municipal Center. In winter, the UHII gravity center is primarily located in Longgang District due to the presence of large numbers of commercial, residential, and public buildings. In spring and summer, the gravity center moves to the south and center of Baoan District, indicating that it is controlled by intensive manufacturing and vibrant economic activity, such as the airport area, where population and traffic volume are high. In autumn, the gravity center shifts towards the Qianhai Municipal Center, which is both an international production service center and a grand business center.
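The gravity center of UHII is, in essence, an intensity-weighted mean location. A minimal sketch (the exact weighting in the paper's gravity-center model may differ; clipping negative UHII values is an assumption made here for illustration):

```python
import numpy as np

def uhii_gravity_center(x, y, uhii):
    """Intensity-weighted centroid of UHII over pixel coordinates (x, y);
    comparing the centroid across seasons traces the movement of the
    heat island's core."""
    w = np.clip(np.asarray(uhii, dtype=float), 0.0, None)  # ignore negative UHII
    return float(np.sum(w * x) / w.sum()), float(np.sum(w * y) / w.sum())
```
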
Analysis of Seasonal Variations of Urban Thermal Environment.
This paper shows that the analysis of seasonal variations of SUHI is crucial to describing the urban thermal environment in Shenzhen. Selected as the study area, Shenzhen is well suited for urban thermal environment studies owing to its rapid urbanization.
The spatial variation characteristics highlighted in this paper are consistent with previous studies that used thermal infrared data such as Landsat TM/ETM. In 2013, for instance, Xie et al. [75] derived land surface temperature from Landsat TM to investigate the relationship between landscape patterns and LST in Shenzhen and showed that the SUHI was located in transportation centers and industrial hubs (especially Baoan Airport, Qianhai and Yantian Ports, and Songgang and Shajing). They argued that ISA can contribute to the increase of LST and that vegetated areas can contribute to its decrease. Xie et al. [6] also assessed the spatial patterns of the thermal environment in Shenzhen, selecting four profiles for LST distribution analysis, and showed that the peak values in the profiles were located in the CBD, industrial land, and transportation centers, while low SUHI values were located in profile sections corresponding to rivers, forests, and lakes.
Several factors may influence the seasonal variation of the SUHI in Shenzhen. Firstly, differences in surface wind may account for seasonal differences in SUHI. Lu et al. (2009) found that the urbanization of Shenzhen can significantly affect the sea breeze [76], and enhanced wind speed can induce lower temperatures. According to the "Shenzhen Climate Report" published by the Meteorological Bureau of Shenzhen Municipality [77], the average annual wind speed is 2.7 m/s, with higher values in winter and autumn (monthly mean wind speed of 2.8-3.0 m/s) and lower values in summer (2.1-2.2 m/s). This is consistent with the present finding that UHII values were higher in spring and summer and lower in autumn and winter. Secondly, the seasonal changes of SUHI were positively and significantly correlated with precipitation [78]. In Shenzhen, mean precipitation was 1,562.5 mm in summer, followed by spring (275.4 mm), autumn (66.0 mm), and winter (27.7 mm), and rainfall from April to September made up 84.5% of the estimated total annual rainfall. Finally, the SUHI was strong in summer and spring and weak in winter and autumn, possibly because urban areas absorb stronger solar radiation in summer and spring: the monthly mean solar radiation in Shenzhen exceeded 400 MJ·m⁻² in summer and spring and decreased to 300-400 MJ·m⁻² in winter and autumn [77]. Stronger human activity in summer and spring may also lead to a stronger SUHI, as the seasonal variations of SUHI are known to be related to anthropogenic heat release [79]. In Shenzhen, vehicles, air conditioners, power plants, and other heat sources release large amounts of heat, especially the anthropogenic heat flux for building cooling, owing to the city's tropical maritime climate with long summers and short winters [80].
The results of this study are also congruent with the findings of Qiao et al. [73], who, in studying the influence of urban expansion on the urban heat island in Beijing during 1989-2010, found that the transfer of the UHI gravity center was highly consistent with urbanization patterns and dynamics, although irregular transfers were observed in some zones. At present, the reasons for shifts in the UHII gravity center are still not clearly known, because such transfers can be affected by a large number of factors, such as vegetation activity [81-83], albedo [84-86], built-up intensity [87], anthropogenic heat emissions [3, 88], and city size and topography [27, 89]. In this study, however, seasonal changes were shown to have the potential to greatly influence the distribution and transfer of UHII gravity centers.
Implementation for Urban Planning. SUHI is an important aspect to consider in achieving urban sustainable development and one of the key levers for solving environmental problems. In light of the above results, three possible mitigating measures can be adopted to counteract SUHI effects in Shenzhen. Firstly, we recommend reducing anthropogenic heat release from human activities. A supporting example is the study by Kikegawa et al. (2003), who estimated that the near-ground temperature decreased by more than 1°C when all air-conditioning waste heat from buildings in a central business district of Tokyo was cut off, i.e., discharged into media other than the atmosphere [90]. Mirzaei and Haghighat (2010) reported that anthropogenic heat release was the main cause of the SUHI in metropolitan areas [91]. In Shenzhen, a major Chinese metropolis, rising energy consumption and the huge demand for summer electricity have made the SUHI effect increasingly severe, especially in the CBD, industrial areas, and residential areas. Sources of waste heat from building air-conditioning should therefore be minimized, and anthropogenic heat release should be controlled as far as possible. Secondly, better roof designs should be adopted. Research has found that building roofs can be used to reduce urban surface temperature, since roof surfaces occupy 20-25% of the total urban surface [92]. Several studies have shown that cool roofs and green roofs are the two main technologies for mitigating the UHI [93, 94]. Cool roofs use materials with high thermal emittance and high solar reflectance, which reflect incident solar radiation away from the building and keep the roof surface cooler than traditional materials. Green roofs, on the other hand, cover the roof surface with plant foliage; the vegetation and soil absorb solar radiation and provide additional thermal insulation [95]. In Shenzhen, one of the largest cities in China, where high-rise buildings have become the mainstream of official and residential development, cool and green roofs are effective SUHI mitigation techniques that can also improve the thermal comfort of non-cooled buildings [96]. Finally, we underscore the great importance of planting trees and vegetation, the most widely applied strategy for mitigating SUHI, as elaborated by several studies [97-99].
Conclusions
This paper investigated the seasonal variation of the urban surface thermal environment in Shenzhen from spring through winter of 2015. Remote sensing techniques and GIS spatial analysis tools were used to retrieve LST from HJ-1B data. The UHII indicator was established, and the density segmentation method was used to classify LST values into seven levels ranging from very high to very low. Spatial analyses, including spatial autocorrelation analysis and gravity center movement analysis, were carried out to examine the distribution dynamics of SUHI seasonal variation.
The results showed the following. (1) During the study period, the distribution of LST in Shenzhen showed a decreasing trend from west to east, with the high temperature regions in the Western Industrial Cluster, the Qianhai and Futian-Luohu Urban Municipal Centers, the Central Urban Cluster, and the Eastern Industrial Cluster. On the seasonal scale, there was a clear LST distribution pattern, with the highest surface temperatures located at the important transportation hubs in all seasons; however, the range of LST values and the areas of high temperature concentration varied by season. (2) The spatiotemporal distribution of UHII is generally consistent with that of LST, with higher SUHI intensities in spring and summer. Five profiles were drawn to analyze the distribution of UHII in different seasons. The No. 1 path profile, which lies on the western developmental axis whose main function is to develop modern service and manufacturing industry, had higher UHII than the other profiles, while the No. 5 path profile, on the southern developmental belt intended to boost finance and tourism, had relatively low UHII values. (3) The LSTs from the four seasons were standardized and classified to characterize the spatiotemporal distribution of LST levels. Among all temperature zones, the high temperature zones occupy the largest proportion in spring and summer, while the medium temperature zones occupy the largest proportion in autumn and winter. (4) The UHII spatial distribution analysis revealed a spatially discontinuous pattern in winter and autumn, whereas summer and spring showed a compact pattern of high temperature zones. Moran's index values were higher in summer and spring, followed by winter and autumn. The gravity center of UHII shifts converged in the Longhua, Baoan, and Nanshan Districts throughout the four seasons. These results indicate that seasonal variation can greatly affect the distribution and transfer of UHII gravity centers in Shenzhen.
Based on the findings highlighted above, this study may provide urban planners in Shenzhen with useful information for monitoring the urban thermal environment in different seasons and could serve as a reference in efforts to alleviate the SUHI effect and improve the management of the urban thermal environment for the well-being of residents. Finally, given that the reasons for the seasonal shifts of the UHII gravity center and the driving forces acting upon the urban thermal environment in Shenzhen cannot be exactly pinpointed in this study, further investigations are highly suggested.
Figure 1: Location of the study area.
Figure 2: Spatial distribution of LST in Shenzhen from spring through winter, 2015.
Figure 3: Spatial distribution of UHII in Shenzhen from spring through winter, 2015.
Figure 4: Detailed land use reference map in Shenzhen, 2015.
Based on "The 2010-2020 Comprehensive Plan of Shenzhen City," five profiles (Figure 5(a)) were drawn to analyze the distribution of UHII. The No. 1 path profile (Figure 5(b)) lies on the western developmental axis, whose main function is to develop modern service and manufacturing industry. It starts at the Qianhai Center Cluster and runs alongside the Pearl River to connect Baoan District and the Western Industrial Cluster, ultimately linking to the city of Guangzhou. The No. 2 path profile (Figure 5(c)) is the central developmental axis, whose main function is to develop integrated services, high-tech industries, and advanced manufacturing. It sets out at the Futian-Luohu Urban Municipal Center and passes through the Longhua and Guangming Center Clusters before reaching the city of Dongguan. The No. 3 path profile (Figure 5(d)) lies on the eastern developmental axis, whose main function is to develop high-tech industries and advanced manufacturing. It begins at the Futian-Luohu Urban Municipal Center and successively traverses the Buji and Longgang Center Clusters before ending in the city of Huizhou. The No. 4 path profile (Figure 5(e)) is the northern developmental belt, with the function of developing multifunctional industries. It starts at Hong Kong Town and runs alongside railways and highways to connect the Longhua, Longgang, and Pingshan Center Clusters, ultimately linking to the cities of Huizhou and Shantou. The No. 5 path profile (Figure 5(f)) is the southern developmental belt, whose purpose is to boost finance and tourism owing to its favorable location along the development corridors of Guangzhou and Hong Kong. It sets off at the Qianhai Center, passes through the Futian-Luohu Urban Municipal Center, and arrives at Daya Bay.
Figure 5: Differences in UHII of different sample path profiles from spring through winter, 2015.
Figure 6: Spatial distribution of LST levels in Shenzhen from spring through winter, 2015.
Figure 8: Change path of the UHII gravity center in Shenzhen from spring through winter, 2015.
Table 1: Main parameters of HJ-1B satellite sensor.
Table 2: Comparison between daily air temperature and seasonal average air temperature.
Table 3: Absolute radiometric calibration coefficients of HJ-1B CCD and IRS camera.
Table 4: Comparisons of LST estimates retrieved from HJ-1B and MOD11A1.
Table 5: The classification criteria of LST level.
Structure in Brazilian maternity hospitals: key characteristics for quality of obstetric and neonatal care
This study aimed to evaluate key characteristics of structure in a sample of maternity hospitals in Brazil. Structure was evaluated according to Ministry of Health criteria and included: geographic location, obstetric volume, presence of ICU, teaching activities, staff qualifications, and availability of equipment and medicines. The results showed differences in staff qualifications and availability of equipment in obstetric and neonatal care according to type of financing, region of the country, and degree of complexity. The North/Northeast and Central-West regions presented the most serious problems with structure. The public and mixed hospitals were better structured in the South/Southeast, reaching satisfactory levels on various items, similar or superior to the private hospitals. The current study contributes to the debate on quality of structure in Brazil's hospital services and emphasizes the need to develop analytical studies considering process and results of obstetric and neonatal care.
Maternity Hospitals; Structure of Services; Quality of Health Care
http://dx.doi.org/10.1590/0102-311X00176913
Bittencourt SDA et al. Cad. Saúde Pública, Rio de Janeiro, 30 Sup:S1-S12, 2014
Introduction
Recent decades have witnessed important strides in women's healthcare as a result of collective efforts, with the important participation of social movements. The inclusion of maternal death as a serious human rights violation definitely helped to include the reduction in maternal mortality as one of the Millennium Development Goals 1.
During this period, maternal mortality decreased significantly in Brazil, although the targeted reduction of 75% by 2015 (compared to the rate in 1990) will not be reached 2. Infant mortality has also decreased significantly, especially due to the post-neonatal component 2. Most of these maternal and neonatal deaths are known to be avoidable 3 and occur (mainly) in hospitals 4.
The quality of obstetric services thus plays an important role in improving maternal and child health. However, quality assessment of obstetric services is not simple, since two patients are involved, sometimes with conflicting needs, and this balance requires complex and careful calculation 5.
To measure quality of healthcare, Donabedian 6 proposed a theoretical framework based on structure, process, and outcomes, a triad that has been widely used in health services research 7. Structure refers to the relatively stable characteristics of services, including the availability of human and financial resources, equipment, and inputs, in addition to their organizational format. Structure alone does not determine quality of care, but its deficiencies can interfere in the results, as studies have shown for some time. Stilwell et al. 8 analyzed maternity hospitals in a region of England and demonstrated a relationship between the number of pediatricians and the perinatal mortality rate.
Studies in Brazilian maternity hospitals showed deficiencies in the availability of equipment, surgical instruments, staff training, and presence of intensive care units (ICU) 9,10,11,12,13, thereby revealing gaps and potentialities in the health system for providing care during labor and delivery with appropriate case resolution.
This study intends to provide a broad overview of structure issues in the sample of healthcare facilities participating in the survey Birth in Brazil 14.
Method
Birth in Brazil was a nationwide hospital-based cohort study on labor and birth 14, the aim of which was to evaluate labor and childbirth conditions in Brazil from February 2011 to October 2012.
The study included healthcare facilities that had assisted more than 500 births in the year 2007 according to the Brazilian Information Systems on Live Births (SINASC).
The sample was stratified according to Brazil's five major geographic regions, location (State capital versus non-State capital), and type of facility according to funding of the deliveries (private, public, or mixed). Mixed facilities were defined as those listed as private in the National Registry of Healthcare Establishments, but which also had beds outsourced by the public sector. Together with the public facilities, these mixed facilities had the Brazilian Unified National Health System (SUS) as their funding source.
Six strata were generated for each of the five regions: location in State capitals (private/mixed/public) and outside State capitals (private/mixed/public). The final sample consisted of 30 strata. For each stratum, a two-stage probabilistic sample was selected: the healthcare establishments were selected in the first stage and the postpartum women and their infants in the second.
Sampling weights were based on the inverse probability of inclusion in the sample. To ensure that the total estimates were equal to the number of hospitals in 2011, a calibration process was used in each stratum. The results shown are estimates for the study's total universe of hospitals (1,402), based on the sample of 266 hospitals visited.
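The weighting scheme described above (inverse-probability base weights, then a within-stratum calibration so totals match the known number of hospitals) can be sketched as follows. This is a simplified post-stratification illustration, not the survey's actual calibration procedure:

```python
import numpy as np

def calibrated_weights(p_inclusion, strata, stratum_totals):
    """Base weight = 1 / P(inclusion); each stratum's weights are then
    rescaled so they sum to that stratum's known hospital count."""
    w = 1.0 / np.asarray(p_inclusion, dtype=float)
    strata = np.asarray(strata)
    for s, total in stratum_totals.items():
        mask = strata == s
        w[mask] *= total / w[mask].sum()
    return w
```

Applied across all 30 strata, such calibrated weights reproduce the study's universe of 1,402 hospitals from the 266 sampled.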
To meet the study's objectives, in addition to the questionnaires applied to the 23,940 selected postpartum women, a questionnaire on hospital structure was completed by the field supervisors during interviews with sampled healthcare facility administrators.
The data collection instrument was developed according to the prevailing Brazilian legislation: RDC/Anvisa n. 36, June 3, 2008 15 21.
Hospitals were classified as follows: according to obstetric volume (number of deliveries per year) 22, categorized as low (≤ 999 deliveries), medium (1,000 to 2,999), and high (≥ 3,000); existence of an adult and/or neonatal intensive care unit (ICU); provision of teaching activities; and whether the facility was a referral hospital for high-risk pregnancy, via a referral call center.
Questions on human resources verified whether there were head physicians and nurses with specialized training in obstetrics and neonatology.
According to the structure required by Brazilian legislation, the study verified the existence of emergency equipment for treating the mother (mechanical respirator/ventilator, manual resuscitator, laryngoscope, and endotracheal tube) and the newborn (laryngoscope and neonatal endotracheal tube, valve-less neonatal suction catheters, meconium aspirator, aspirator with manometer and oxygen, gastric aspiration tube, and material for ventilation). The questionnaire also checked the existence of a blood bank or transfusion service and a clinical pathology laboratory, and the availability of an ambulance for mothers and newborns.
The questionnaire also asked about the availability of the following drugs in the hospital: anti-hypertensive drugs, anxiolytics/hypnotics, steroids, oxytocin, uterine contraction inhibitors, coagulants/hemostatic drugs for the woman and newborn, and specifically magnesium sulfate (anticonvulsant), surfactant (to induce neonatal pulmonary maturation), solution or ointment for the prevention of neonatal conjunctivitis, and anti-D immunoglobulin for Rh-negative women.
The analysis included distribution of the relative frequencies of the target variables according to type of financing (public, mixed, and private). Within each of these three strata, hospitals were grouped by similarity into three macro-regions: North/Northeast, South/Southeast, and Central. Finally, structure data were examined according to two groups of hospitals: those with higher complexity, defined as having a neonatal ICU with six or more beds plus ICU beds for adults, and the rest, defined as having lower complexity.
The research project was approved by the Institutional Review Board of the National School of Public Health/Fiocruz (review n. 92/10). There was no conflict of interest with the research methods or any financial conflict of interest for the researchers.
Results
Of all the healthcare establishments studied, 36.1% were public, 45.7% mixed, and the rest (18.2%) private. Across the three macro-regions, slightly more than half of the hospitals in the North/Northeast were public, compared to 43% in the Central and 23.5% in the South/Southeast. Mixed hospitals accounted for 24.6% in the North/Northeast, 34% in the Central, and 60.9% in the South/Southeast. Private hospitals varied from 15.5% in the South/Southeast (the lowest proportion) to 23% in the Central, the highest.
According to Table 1, nearly 30% of the public and private maternity hospitals were located in State capitals, as compared to 13.4% of mixed hospitals. The pattern changed in the Central, where most public and mixed hospitals were in the State capitals (63% and 68%, respectively), suggesting coverage problems outside the capital cities in this region.
The study also analyzed the obstetric volume, or number of deliveries per maternity hospital. For the country as a whole, most hospitals performed a medium volume (1,000 to 2,999 deliveries per year). The exception was the Central region, where most facilities performed fewer deliveries, in both mixed (56%) and private hospitals (61%).
Table 1 also shows that hospitals with ICU beds were more common in the South/Southeast (69% of public, 67% of mixed, and 98% of private maternity hospitals) and in private hospitals overall (86%). The most common situation was to have both neonatal and adult ICU beds.
Teaching was conducted mostly in public (77%) and mixed hospitals (74%), and was especially common in hospitals in the Central (100% of public and 85% of mixed hospitals).
A specific question for public and mixed hospitals was whether they were referral facilities for high-risk pregnancy and were connected to a call center for high-risk beds. Public hospitals showed the highest proportion of high-risk referral facilities (35%), compared to 25% of mixed hospitals. In the South/Southeast, 56% of public hospitals and 30% of mixed hospitals received high-risk referrals.
Technical responsibility for care in the various specialties should generally fall to individuals with the appropriate leadership and training in order to keep the services up-to-date in terms of knowledge, technology, and other quality-of-care issues. Specialization should ensure that staff will manage these issues properly. As shown in Table 2, all three types of financing showed a lower proportion of head physicians and nurses with specialized training in obstetrics in the North/Northeast. More head physicians had received specialized training in obstetrics when compared to head nurses. The difference was even greater in neonatology, ranging from 32% of head pediatricians in public maternity hospitals in the North/Northeast and in mixed maternity hospitals in the Central to 100% of private hospitals in the North/Northeast. As for head nurses with specialized training in neonatology, the proportion ranged from 35% in public maternity hospitals in the North/Northeast to 82% in mixed facilities in the Central. The proportion of maternity hospitals where all four coordinators had specialized training (both head physicians and nurses in both obstetrics and neonatology) was higher in the South/Southeast and in public hospitals and was especially low in the North/Northeast, possibly due to the lack of such specialists in that macro-region.
Table 3 shows the availability of essential and strategic equipment for maternal and neonatal survival in emergencies. For maternal emergencies, the availability was greater in private (99%) and mixed (89%) and lower in public hospitals (71%), with a greater need in the North/Northeast, where only 56% of public hospitals had such equipment. For neonatal emergencies as well, the availability was higher in private hospitals (88%), compared to 82% in mixed and 68% in public hospitals. Again, the gaps were greater in hospitals in the North/Northeast: only 45% of public hospitals and 64% of mixed hospitals had all the necessary equipment. The availability of a blood bank or transfusion service varied from 48% in mixed hospitals in the North/Northeast to 84% in mixed hospitals in the South/Southeast; overall, it was 75% in mixed, 69% in public, and 67% in private hospitals. Clinical pathology laboratories existed in 70% of mixed hospitals in the North/Northeast and 100% of public hospitals in the Central; the overall figures were 92% in public, 87% in private, and 85% in mixed hospitals. The availability of an ambulance for the woman varied from 50% in private hospitals in the North/Northeast to 100% in various regions and types of financing; overall, it was 97% in public, 88% in mixed, and 61% in private hospitals. Ambulance availability for the newborn varied from zero in private hospitals in the Central to 100% in public hospitals in the Central; overall, it was 67% in public, 51% in mixed, and 17% in private hospitals.
Regarding essential medicines, as shown in Table 4, the situation was the opposite, with lower proportions in the private sector, except for surfactant and coagulant/hemostatic drugs for the woman. Still, concerning the availability of all drugs listed as essential, there was a reversal, with the following rates: private (71%), mixed (59%), and public (43%). Again, the largest gaps appeared in the North/Northeast, where only 37% of public and 35% of mixed hospitals had the complete list.
Table 5 shows that hospitals with higher complexity, defined here as having six or more neonatal ICU beds plus adult ICU beds, comprised 30% of the public and mixed and 59% of the private hospitals. They were generally located in State capitals, especially in the case of public maternity hospitals (64%). There were proportionally more hospitals with higher complexity in the mixed financing category (80% in the North/Northeast and 64% in the South/Southeast) and in the private category (68% in the South/Southeast and 57% in the Central). Hospitals with higher complexity tended to have a medium obstetric volume, while those with lower complexity mostly performed fewer deliveries. Higher-complexity hospitals frequently included teaching activities, served as high-risk referrals, and had head physicians and nurses with specialized training. These were also the hospitals that tended to have essential maternal and neonatal emergency equipment. Except for the private hospitals, the higher-complexity facilities were also more likely to have blood banks or transfusion services, clinical pathology laboratories, and ambulances for mothers and newborns.
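The complexity criterion used above reduces to a simple decision rule. The sketch below is illustrative only; the function and parameter names are ours, not the survey instrument's:

```python
def classify_complexity(neonatal_icu_beds: int, adult_icu_beds: int) -> str:
    """Classify a maternity hospital per the study's definition:
    'higher' complexity means six or more neonatal ICU beds plus
    at least one adult ICU bed; anything else is 'lower'."""
    if neonatal_icu_beds >= 6 and adult_icu_beds >= 1:
        return "higher"
    return "lower"
```

Note that, under this definition, a facility with many neonatal ICU beds but no adult ICU is still classified as lower complexity.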
Discussion
By producing an overview of key structure issues in Brazilian maternity hospitals, this study aimed to identify the potentialities and deficiencies of the country's health system in obstetric and neonatal care. This subject has drawn increasing attention from Brazilian researchers, given the country's persistently and unacceptably high maternal and perinatal mortality rates, despite the increasing coverage of in-hospital deliveries 4,10,22,23,24,25. Although this article did not consider the quality of obstetric and neonatal care in the selected maternity hospitals, evidence of the association between professional staff supply, an adequate setting for providing safe care for women and newborns, and the occurrence of favorable outcomes reaffirms the importance of evaluating structure in its own right 12,26. The study's sampling design allowed a more in-depth investigation of variations in the structure of maternity facilities according to type of financing and geographic location.
The study showed that the largest network of obstetric and neonatal care is outsourced by the SUS, corroborating similar studies in Rio de Janeiro 3,7 , Greater Metropolitan São Paulo 22 , and Santa Catarina State 27 .
For maternity hospitals with mixed financing, the study did not determine the proportions of users of the SUS versus clientele of private health plans or out-of-pocket users. However, the results confirmed that the proportionally larger network of public SUS maternity hospitals in the North/Northeast is due to the low population contingent covered by private health plans in that macro-region. Meanwhile, the concentration of the clientele covered by private health plans or paying out of pocket in the South/Southeast may indicate different patterns of health plans between the mixed and private maternity hospitals, besides expressing the organization of supply in locations with fewer public facilities, the need to hire private services, and the need for private facilities to complement their revenue through service provision agreements with the SUS.
The greater availability of healthcare facilities outsourced by the SUS outside the State capitals was expected, given the population's dispersal across large numbers of small cities and towns, especially in the North/Northeast. The different pattern in the Central region of Brazil is worrisome, with an over-concentration of maternity hospitals in the State capitals. Unlike other regions, in the South/Southeast nearly all of the maternity hospitals with mixed financing were located outside the State capitals, suggesting that in smaller cities the supply must be diversified for the two clienteles to avoid multiplying services, which would be cost-ineffective; meanwhile, the public hospitals were concentrated in the State capitals, with a distribution similar to that of the private sector. The percentages of private hospitals located outside the State capitals varied little between regions, suggesting a private network organized according to its own logic.
The analysis of maternity hospitals according to complexity (whether they had a neonatal ICU with six or more beds and an adult ICU) showed evidence of a difference in organization according to the three types of financing. The private network was better equipped, and there was no difference in the distribution of hospitals classified according to complexity between the State capitals, countryside, or region of the country. Most of the higher-complexity public hospitals were located in the State capitals, with fewer in the countryside, especially in the North/Northeast. This suggests possible gaps for the population with access to healthcare exclusively through the SUS, who may or may not be served by mixed hospitals; among the latter, the higher-complexity facilities are concentrated in the countryside, with an important share in the North/Northeast of the country.
Despite the study's inherent limitations, especially the lack of detailed data on the number of available beds for admissions and the size, demographic and social profile, and health needs of the childbearing-age and newborn population 10, the results presented here emphasize the geographic inequality in the supply of hospital services in the SUS, especially hospitals with higher complexity. The findings also show healthcare gaps that can force patients to travel long distances for hospitalization to give birth in a context of limited support for pregnant women, thereby increasing the risk of infant death, as shown by Almeida & Szwarcwald 28, in addition to confirming that the regionalization of hospital care is still a challenge for Brazil.
The indirect indicators of the degree of complexity in the study sample's maternity hospitals were the number of procedures performed, the existence of a neonatal ICU with at least six beds and/or an adult ICU, teaching activities, head physicians and nurses with specialized training in obstetrics and neonatology, and specifically for the public and mixed hospitals, being a referral hospital for high-risk pregnancies.
In relation to these characteristics, the results reconfirm the hospital network's heterogeneity. Public and mixed hospitals showed a greater supply of facilities with medium and high obstetric volume in the year 2007, where the higher-complexity hospitals were concentrated, which agrees with the tendency whereby a higher number of deliveries justifies expenditures on maintenance of equipment and staff trained in the use of sophisticated medical technology for managing emergency situations 23,29. However, there were numerous public and mixed hospitals that performed more than a thousand deliveries in 2007 and did not have an ICU. Meanwhile, in the private network, although there were more hospitals that performed fewer deliveries, facilities with an ICU were more common, which could be indicative of the need for intensive care for newborns, associated with either high cesarean rates in this sector or the clientele's demands.
Many public and mixed hospitals conducted teaching activities, which could be indicative of more experienced staff and thus a greater possibility of a positive impact on quality of care. On the assumption that head physicians and nurses with specialized training in obstetrics and neonatology could show greater clinical competence for decision-making to perform appropriate procedures 13,30, the article simply listed the existence of a head physician and/or nurse and their academic degrees. Even so, the presence of head physicians and nurses in the obstetrics and neonatology services was low, especially those with specialized training, even in higher-complexity hospitals. The most dramatic situation was in public maternity hospitals in the North/Northeast. In the other regions, head physicians and nurses were nearly twice as common in public and mixed maternity hospitals as in the private network.
Another mechanism with the potential to expand access for patients that most need care was the regulation of hospitalization for delivery in the SUS, especially for high-risk pregnant women and newborns.
Higher-complexity public and mixed maternity hospitals predominated among those serving as high-risk referral facilities through hospital admissions call centers. Even so, a surprising percentage of these hospitals failed to report that they served as referral facilities for other maternity hospitals, displaying a lack of organization in the network for high-risk pregnancies and neonatal care. Another important point was the existence of low-complexity facilities that identified themselves as referral hospitals for high-risk pregnancies. Of this total, 33% were located outside the State capitals in the Northeast.
The study identified major gaps in hospital structure that can jeopardize the quality of obstetric and neonatal care, potentially increasing adverse maternal and neonatal outcomes 12 .
The study showed that the minimum equipment for managing obstetric emergencies was reported as available in all hospitals in the private network and in all public and mixed facilities with higher complexity. As for neonatal emergency equipment, a significant proportion of hospitals failed to present the complete set of necessary equipment. This situation is worrisome, especially in lower-complexity public and mixed hospitals in the North and Northeast, and may be reflected in neonatal mortality rates.
Hemorrhage is one of the main causes of maternal death in Brazil, so it is worrisome that 40% of higher-complexity maternity hospitals in the private sector lack blood banks or transfusion services, especially considering their high surgery rates.The lack of blood transfusion capability in the hospital delays treatment in these cases 13 .
Although the availability of ambulances in maternity hospitals is necessary to guarantee timely hospitalization for adequate obstetric care, the study detected a critical situation, especially in the private sector.The situation was even worse for transferring newborns from lower-complexity maternity hospitals, potentially contributing to avoidable neonatal deaths, since the most common reason for transferring newborns is the need for neonatal intensive care 4,13 .
At the time of the interview, an important percentage of maternity hospitals reported not having one or more of the essential medicines available. The missing medicines included those for inducing pulmonary maturation in the newborn, interrupting hemorrhage, preventing Rh-negative alloimmunization, or preventing neonatal conjunctivitis. This scenario is problematic since it can directly increase rates of such complications as miscarriage, neonatal respiratory distress syndrome 31, maternal and infant death, and Sheehan syndrome 32.
The study showed a large proportion of poorly equipped maternity hospitals lacking specialized staff, and the results indicate that the distribution of higher-complexity hospitals is more unequal than that of lower-complexity facilities. Of all the regions, the North/Northeast, followed by the Central, showed the worst gaps and problems, especially in public and mixed maternity hospitals. In the South/Southeast, these hospitals had better structures, reaching similar or even higher proportions than in the private sector. The results indicate that an important share of mothers and newborns were exposed to unnecessary and avoidable risks.
Despite some uncertainties concerning the reliability of structure data provided by administrators of maternity hospitals in the sample (since the study's field supervisors did not directly verify the items in the data collection instrument), this choice guaranteed both participation by all the hospitals selected in the sample and a low non-response rate. Importantly, the availability of equipment and inputs does not necessarily mean that the women's health needs were met when they sought care at these facilities.
Even considering the study's limitations, the results provide backing for the debate on quality of hospital services in Brazil. They point to the need to continue the evaluation of hospital structure and develop analytical studies to explore the question of variation in hospital performance, which will require more detailed information on other aspects of hospital structure, the socioeconomic profile and case severity of the clientele, and the process of obstetric and neonatal care, based on applying questionnaires to postpartum women and retrieving data from patient files in the Birth in Brazil survey.
Finally, future studies should focus on the structure of regionalized perinatal care networks as the unit of analysis, since the issues of complexity, regulation, availability of blood banks and transfusion services, and others should be measured according to regional health needs, thus contributing to proposals for quality improvement and suggesting paths for the organization of regional healthcare networks 14 , from the perspective of backing the organization and operation of the SUS.
Table 1
Proportion of maternity hospitals according to type of financing and major geographic region, location in State capital, and key infrastructure characteristics. Brazil, 2010 *.
C: Central; N: North; NE: Northeast; S: South; SE: Southeast; ICU: intensive care unit. * Values weighted according to sampling plan.
Table 2
Proportion of maternity hospitals according to type of financing, major geographic region, and head physicians and nurses with specialized training. Brazil, 2010 *.
C: Central; N: North; NE: Northeast; S: South; SE: Southeast; ICU: intensive care unit. * Values weighted according to sampling plan.
Table 3
Proportion of maternity hospitals according to type of financing, major geographic region, availability of emergency equipment, blood bank, clinical pathology laboratory, and ambulances. Brazil, 2010 *.
C: Central; N: North; NE: Northeast; S: South; SE: Southeast. * Values weighted according to sampling plan.
Table 4
Proportion of maternity hospitals according to type of financing, major geographic region, and availability of medicines. Brazil, 2010 *.
C: Central; N: North; NE: Northeast; S: South; SE: Southeast. * Values weighted according to sampling plan.
Table 5
Proportion of maternity hospitals according to type of financing, level of complexity, location in State capital, and structure. Brazil, 2010 *.
* Values weighted according to sampling plan.
The Joint Effect of Perceived Psychosocial Stress and Phthalate Exposure on Hormonal Concentrations during the Early Stage of Pregnancy: A Cross-Sectional Study
Phthalates alter the hormonal balance in humans during pregnancy, potentially affecting embryonic and fetal development. We studied the joint effect of exposure to phthalates, quantified by urinary phthalate metabolite concentration, and perceived psychological stress on the concentration of hormones in pregnant women (n = 90) from the Nitra region, Slovakia, up to the 15th week of pregnancy. We used high-performance liquid chromatography, tandem mass spectrometry (HPLC-MS/MS), and electro-chemiluminescence immunoassay to determine urinary concentrations of phthalates and serum concentrations of hormones, respectively. We used Cohen perceived stress scale (PSS) to evaluate the human perception of stressful situations. Our results showed that mono(carboxy-methyl-heptyl) phthalate (cx-MiNP) and a molar sum of di-iso-nonyl phthalate metabolites (ΣDiNP) were negatively associated with luteinizing hormone (LH) (p ≤ 0.05). Mono(hydroxy-methyl-octyl) phthalate (OH-MiNP) and the molar sum of high-molecular-weight phthalate metabolites (ΣHMWP) were positively associated with estradiol (p ≤ 0.05). PSS score was not significantly associated with hormonal concentrations. When the interaction effects of PSS score and monoethyl phthalate (MEP), cx-MiNP, ΣDiNP, and ΣHMWP on LH were analyzed, the associations were positive (p ≤ 0.05). Our cross-sectional study highlights that joint psychosocial stress and xenobiotic-induced stress caused by phthalates are associated with modulated concentrations of reproductive hormones in pregnant women.
Introduction
Prenatal development is a complex process regulated by genetic and hormonal factors and the environment of the mother and fetus [1]. The prenatal period, especially the early stage of pregnancy, is dependent on the maternal endocrine system [2]. The maternal endocrine system changes rapidly during pregnancy [3] and can be affected by various environmental factors, such as exposure to environmental chemicals with further adverse effects on the developing fetus [4].
Phthalates are chemicals used in the plastic industry to soften plastic materials [5]. They are primarily used in polyvinyl chloride products [6]. People are exposed to phthalates ubiquitously. They act as endocrine disruptors in the human body affecting the physiological hormonal balance of the organism [7], such as decreased maternal concentrations of testosterone [8], thyroid hormones [9], fetal concentrations of cortisol [10] as well as increased maternal concentrations of estradiol [8]. Moreover, phthalates can pass the placental barrier and affect fetal development and health [11], resulting in adverse pregnancy outcomes [12], as well as in numerous reproductive [13] and neurodevelopmental disorders of progeny [14].
Psychosocial stress during pregnancy is one of the most significant environmental factors inducing an imbalance in the maternal endocrine system. The stress response is regulated by various physiological processes that try to maintain the dynamic balance of the organism. Its essential constituent is the hypothalamic-pituitary-adrenal (HPA) axis [15], regulated by the hypothalamic paraventricular nucleus. Neurons in that region secrete corticotropin-releasing hormone (CRH), stimulating the secretion of adrenocorticotropic hormone (ACTH) in the anterior lobe of the pituitary gland. ACTH induces cortisol secretion in the adrenal gland. Cortisol in the bloodstream inhibits the further secretion of CRH and ACTH from the hypothalamus and pituitary gland by negative feedback [16]. However, the stress response does not affect only the secretion of cortisol. The HPA axis can be modulated by the activity of the hypothalamic-pituitary-thyroid (HPT) and hypothalamic-pituitary-gonadal (HPG) axes. CRH interacts with hypothalamic neurons secreting gonadotropin-releasing hormone (GnRH) and thyrotropin-releasing hormone (TRH), resulting in the inhibition of luteinizing hormone (LH) and thyroid-stimulating hormone (TSH) secretion by the pituitary gland. This interaction is linked with decreased sex steroids and thyroid hormones [17,18]. Previous studies have observed associations between perceived stress and modulated hormonal concentrations [19], resulting in preterm birth and low birth weight [20], as well as impairment of the reproductive and neural health of progeny [21].
Although perceived psychosocial stress and exposure to chemicals are associated with similar adverse health outcomes, only a few studies have focused on the joint effect of xenobiotic-induced stress and psychosocial stress on women's health during pregnancy. We considered investigating such a combination of stressors warranted, given the possible significant magnification of adverse health effects. Our study aimed to determine the association of the joint effect of phthalate metabolites and perceived psychosocial stress (PSS score) with hormonal concentrations.
Study Population
The present cross-sectional study is a part of the Mother-Infant Study Cohort (PRENATAL), designed to investigate the association between maternal phthalate exposure and reproductive and neurobehavioral outcomes of progeny. The study population consisted of pregnant women up to the 15th week of pregnancy (n = 90) from the Nitra region, Slovakia. The research was conducted with the approval of the University Hospital Ethics Committee in Nitra. Participation was anonymous and voluntary, and all probands signed informed consent prior to involvement. The sample collection and exclusion criteria were described elsewhere [22].
The Questionnaire Method of Data Collection
A trained technician completed the questionnaires to obtain essential data on health conditions, previous pregnancies, and baseline characteristics during the early-pregnancy visit. We used the Cohen perceived stress scale-10 (PSS-10) to evaluate the perception of stressful situations. This scale contains ten questions, four formulated positively and six negatively. For each question, the proband chooses one of five possible answers: never, rarely, occasionally, quite often, and often. Each question is scored on a 5-point scale ranging from never (0) to frequently (4). Positively formulated items are reverse-scored. The final score for an average person without chronic stress or stress-related illness is around 13 points. A stress-exposed person scores an average of 20 points or more [23]. Based on this score, the cohort of pregnant women was divided into two groups: low-stress (≤19 points) and high-stress (≥20 points) probands.
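The scoring procedure described above can be sketched as follows. This is a minimal illustration; it assumes the standard PSS-10 convention that items 4, 5, 7, and 8 are the positively worded, reverse-scored ones, since the text does not list which items they are:

```python
POSITIVE_ITEMS = {4, 5, 7, 8}  # 1-based item numbers (assumed, per standard PSS-10)

def pss10_score(answers: list) -> int:
    """Total PSS-10 score: ten answers in 0..4, in item order 1..10.
    Positively worded items are reverse-scored (4 - answer)."""
    assert len(answers) == 10 and all(0 <= a <= 4 for a in answers)
    total = 0
    for item, a in enumerate(answers, start=1):
        total += (4 - a) if item in POSITIVE_ITEMS else a
    return total

def stress_group(score: int) -> str:
    """Dichotomization used in the study: <=19 low, >=20 high."""
    return "high" if score >= 20 else "low"
```

For example, answering the midpoint (2) on every item yields a total of 20, which would fall into the high-stress group under the study's cutoff.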
Qualitative and Quantitative Analysis of Phthalate Metabolites from Urine Spots
The qualitative and quantitative analysis of phthalate metabolites has been described elsewhere [24]. Briefly, we used high-performance liquid chromatography (HPLC) and tandem mass spectrometry (MS/MS) (Infinity 1260 and 6410 triple quad, Agilent, Santa Clara, CA, USA) to quantify the urinary concentration of 17 phthalate metabolites by a method built on the basis of previously published offline SPE and online HPLC-MS/MS methods [25,26]. The analysis was performed in the Physiological Analytical Laboratory, Constantine the Philosopher University in Nitra. Our laboratory passed interlaboratory tests in the HBM4EU QA/QC program (HBM4EU). Internal quality control was performed by analyses of 2 control materials (a mixture of urine samples) with known concentrations (lower and higher concentrations). The limits of quantification (LOQ) were estimated based on the lowest quantifiable concentration of the standard in the calibration curve individually for each phthalate metabolite. LOQs were estimated between 1 and 2.5 ng/mL. Precursor and product ions and LOQs are shown elsewhere [24].
Statistics
For phthalate metabolite concentrations below the LOQ, we imputed the LOQ divided by the square root of 2 when <20% of samples fell below the LOQ, and the LOQ divided by 2 when >20% of samples fell below the LOQ. Only those phthalate metabolites whose concentrations were above the LOQ in at least 70% of samples were included in the statistical analyses.
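A minimal sketch of the imputation rule described above, with non-detects encoded as `None`; function and variable names are ours:

```python
import math

def impute_below_loq(values, loq):
    """Replace non-detects (None) per the rule described above:
    LOQ / sqrt(2) when fewer than 20% of samples fall below the LOQ,
    LOQ / 2 otherwise. (Metabolites quantified in fewer than 70% of
    samples would be excluded before this step.)"""
    n_below = sum(v is None for v in values)
    frac_below = n_below / len(values)
    fill = loq / math.sqrt(2) if frac_below < 0.20 else loq / 2
    return [fill if v is None else v for v in values]
```

For instance, with an LOQ of 2 ng/mL and one non-detect in ten samples (10% below the LOQ), the non-detect is replaced by 2/√2 ≈ 1.41 ng/mL.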
Pearson's correlation analysis, unpaired t-test, and one-way analysis of variance (ANOVA) were used to determine confounding variables from probands' baseline characteristics. We analyzed the following numeric variables: week of pregnancy at the time of sample collection, age, BMI, number of previous pregnancies, and nominal variables: sex of the child, active and passive smoking, education, and living area. The following significant (p ≤ 0.05) confounding variables were detected: week of pregnancy at the time of sample collection, age, BMI, and active and passive smoking. We created a path diagram (Figure 1) to visualize the potential associations between exposure (phthalate metabolites, perceived stress), outcome (hormonal concentrations), and confounding variables (age, week of pregnancy, BMI, active and passive smoking).
We first tested the main effects of phthalate metabolite concentrations and PSS score separately using multiple linear regression adjusted for confounders. Next, we used multiple linear regression to test whether phthalate metabolite concentrations interacted with the PSS score to predict the hormonal concentrations of pregnant women. For this purpose, we used guidelines for interaction effects provided by Aiken and West [28] described in Schreier et al. [29]. To visualize our results, we used general mixed models.
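The interaction analysis can be sketched on simulated data as below, following the Aiken & West recommendation to mean-center predictors before forming the product term. All variable names, effect sizes, and the simulated data are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated, illustrative data: log metabolite concentration, PSS score,
# a confounder (age), and log LH with a built-in interaction effect.
n = 90
log_mep = rng.normal(3.0, 1.0, n)
pss = rng.normal(15.2, 4.8, n)
age = rng.normal(30.8, 5.0, n)
log_lh = (0.1 * log_mep + 0.02 * pss + 0.015 * (log_mep * pss)
          - 0.01 * age + rng.normal(0, 0.3, n))

# Aiken & West: mean-center the predictors before forming the product
# term, so the interaction coefficient is interpretable at mean levels.
mep_c = log_mep - log_mep.mean()
pss_c = pss - pss.mean()

# Design matrix: intercept, centered main effects, interaction, confounder.
X = np.column_stack([np.ones(n), mep_c, pss_c, mep_c * pss_c, age])
beta, *_ = np.linalg.lstsq(X, log_lh, rcond=None)
print(f"estimated interaction coefficient: {beta[3]:.3f}")
```

The published analysis additionally adjusted for week of pregnancy, BMI, and smoking, and assessed significance of each coefficient; this sketch only shows the centering-and-product-term mechanics.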
We divided our cohort into two groups based on the PSS score (lower and higher PSS score). The associations between hormonal concentrations and concentrations of phthalate metabolites, estimated with general mixed models for each group, are plotted in Figures 2-6. We used IBM SPSS Statistics (version 21.0; SPSS Inc., Chicago, IL, USA) and jamovi for statistical analysis. The effect size was considered statistically significant when p ≤ 0.05.
Figure 1. Path diagram explaining the potential associations between the concentrations of hormones, phthalate metabolites, PSS score, and confounding variables (age, BMI, week of pregnancy, active and passive smoking). Solid arrows represent associations between the main variables (concentrations of hormones, phthalate metabolites, PSS score) or between a confounding variable and a main variable. Interrupted arrows represent associations between confounding variables.
Demographic Characteristics
The cohort (PRENATAL) consisted of 90 women up to the 15th week of pregnancy from the Nitra region, Slovakia. Their average age was 30.80 ± 4.97 years, and the average week of gestation was 10.46 ± 1.80. Their average PSS score was 15.20 ± 4.82 points, which is considered normal. The descriptive characteristics of our cohort are shown in Table 1.
Associations between Phthalates, Hormones, and Perceived Stress
We analyzed the relationships between log-transformed concentrations of phthalate metabolites and log-transformed hormonal concentrations using multiple linear regression adjusted for confounding variables (week of pregnancy at the time of sample collection, age, BMI, and active and passive smoking) (Table S1 in Supplementary Data).
To examine the perception of psychosocial stress, we used a questionnaire examination method, namely the Cohen perceived stress scale (PSS), which consists of 10 questions. The final test score is the sum of points for all questions in the test. The higher the score, the greater the chance that the proband experiences a higher level of stress, which could be associated with a disturbance of hormonal balance. We analyzed the relationships between PSS score and hormonal concentrations using multiple linear regression adjusted for confounding variables (Table S1 in Supplementary Data). We did not observe any significant association between PSS score and hormonal concentrations in adjusted models.
We investigated whether log-transformed concentrations of phthalate metabolites and psychosocial stress interacted to affect the log-transformed hormonal concentrations in pregnant women by multiple linear regression adjusted for confounding variables (week of pregnancy at the time of sample collection, age, BMI, and active and passive smoking). There were significant PSS score × metabolite interactions for MEP (β = 0.218, p = 0.042), cx-MiNP, ΣDiNP, and ΣHMWP, among others (Table S1 in Supplementary Data). As can be seen in Figures 2-6, there is an antagonistic effect of phthalate metabolites (MEP, OH-MiNP, cx-MiNP, ΣDiNP, ΣHMWP) on concentrations of LH depending on the PSS score level. In the group of probands with a higher PSS score, there is a positive association between levels of phthalate metabolites and LH, while a negative association can be observed in the less stressed group of probands.
When comparing the main effects of PSS score or phthalate metabolites separately with their interaction effect on hormonal concentrations, the associations with estradiol disappeared. On the contrary, more associations with LH became significant, but their direction changed from negative to positive.
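The interaction models above can be reproduced in outline. The sketch below uses hypothetical, simulated data; the variable names and effect sizes are illustrative, and only the cohort size, the PSS mean/SD, and the adjustment set follow the text. It fits an ordinary least-squares model containing a PSS score × metabolite product term alongside the main effects and covariates.

```python
# Sketch of an adjusted interaction model (simulated data, illustrative only).
# Outcome: log(LH); predictors: log(metabolite), PSS score, their product,
# plus the covariates used in the study (gestational week, age, BMI, smoking).
import numpy as np

rng = np.random.default_rng(0)
n = 90  # cohort size reported in the paper

log_mep = rng.normal(3.0, 1.0, n)        # log-transformed metabolite (e.g., MEP)
pss = rng.normal(15.2, 4.8, n)           # PSS score (mean/SD as in Table 1)
week = rng.normal(10.5, 1.8, n)
age = rng.normal(30.8, 5.0, n)
bmi = rng.normal(24.0, 3.5, n)
smoking = rng.integers(0, 2, n).astype(float)

# Simulate a known interaction effect so the fit has something to recover.
beta_int = 0.2
log_lh = (1.0 - 0.1 * log_mep + 0.02 * pss + beta_int * log_mep * pss
          + 0.01 * week + rng.normal(0, 0.1, n))

# Design matrix: intercept, main effects, interaction term, covariates.
X = np.column_stack([np.ones(n), log_mep, pss, log_mep * pss,
                     week, age, bmi, smoking])
coef, *_ = np.linalg.lstsq(X, log_lh, rcond=None)
print(coef[3])  # estimated PSS score x metabolite interaction beta
```

With real data, the sign of `coef[3]` is what distinguishes the antagonistic pattern described above: a positive interaction coefficient means the metabolite–LH slope increases with the PSS score.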
Discussion
Several studies have simultaneously examined xenobiotic-induced stress and psychosocial stress in pregnant women [29][30][31][32]. To our knowledge, only one examined the effects of such stressors on the modulation of hormone concentrations in pregnant women [29]. Schreier et al. [29] noticed that higher mercury concentrations could result in decreased morning cortisol concentrations, but only in stress-exposed pregnant women from Mexico City (n = 732). Our study is probably the first to examine the joint effect of phthalate metabolites and psychosocial stress on hormonal concentrations during pregnancy. It focuses on such relationships in view of the association of health during pregnancy and the postnatal health of the progeny with the maternal hormonal system.
A strong relationship between phthalate exposure and disruption of hormonal concentrations in pregnant women has been previously reported [8,33]. In addition to these observations, our data suggest that these relationships may be modified by perceived stress. Our study revealed positive associations between the concentrations of phthalate metabolites and estradiol, and negative associations with LH. We also observed associations between the PSS score × phthalate metabolite interactions and the concentration of LH.
Associations between Phthalate Exposure and Hormonal Concentrations
We observed a negative association between LH and cx-MiNP and ΣDiNP. Al-Saleh [34] showed non-significant positive associations between phthalate metabolites and levels of LH in Saudi women (n = 523) undergoing in vitro fertilization. Higher levels of oxo-MEHP were associated with higher LH in women (n = 58) and men (n = 48) aged 11-88 years from China during summer but not during winter [35]. On the contrary, the study of Wen et al. [36] noticed an inverse association between DEHP metabolites and LH in pubertal boys and girls (n = 239) in Taiwan; however, this association was significant only in boys.
Our results have shown that OH-MiNP, ΣDiNP, and ΣHMWP were positively associated with estradiol levels. However, the results of other studies are inconsistent. According to Sathyanarayana et al. [8], MiBP, MBzP, MEHP, and oxo-MEHP were associated with increased estradiol levels during early pregnancy in pregnant women from the TIDES cohort (n = 591). On the contrary, Cao et al. [37] noticed an inverse association between LMWP metabolites and estradiol in women (n = 246) from China. Interestingly, the study of Johns et al. [33] reported non-significant positive and negative associations between estradiol and levels of LMWP and HMWP metabolites, respectively, in pregnant women from Puerto Rico (n = 106).
The inconsistencies in associations between phthalate metabolites and reproductive hormone concentrations (LH, estradiol) may be attributed to the various population groups in these studies. There are differences in reproductive physiology between men and women, as well as between puberty and adulthood [38]. This may lead to different associations between phthalate metabolites and reproductive hormones. Another reason for the inconsistent results may be the different estrogenic activity of the various phthalate diesters and their metabolites. The basic chemical structure of most phthalate metabolites is the same: it consists of a benzene ring. However, the metabolites differ in side-chain length, which could lead to different physicochemical properties and different mechanisms of toxicity in the body [39,40]. Phthalates exerting estrogenic activity, such as the DiNP metabolites in our study, could stimulate the estrogen receptor or estradiol synthesis, leading to decreased LH via a negative feedback loop within the HPG axis. On the contrary, phthalates exerting anti-estrogenic activity could block the estrogen receptor or inhibit estradiol synthesis, leading to increased LH [41][42][43].
Joined Effect of Phthalate Exposure and Perceived Psychosocial Stress on Hormonal Concentrations
Although we did not find any significant associations between PSS score and hormones, we observed a significant positive association with LH when we evaluated the interaction between PSS score and phthalate metabolites. Surprisingly, when we evaluated the association between DiNP metabolites and LH separately, without the PSS score, we noticed a negative association. In contrast, when we assessed the PSS score × phthalate metabolite interaction, the direction of the association with LH changed to positive. When we divided probands into two groups based on the PSS score (lower and higher PSS score), we observed an antagonistic effect of phthalate metabolites depending on the PSS score level. In the group of probands with a higher PSS score, there was a positive association between levels of phthalate metabolites and LH, whereas a negative association was observed in probands with a lower score.
Published studies on the effect of perceived psychosocial stress on the endocrine system showed inconsistent results. High stress levels during pregnancy were associated with increased serum cortisol and CRH concentrations [44]. However, Braig et al. [45] did not observe significant correlations between self-reported psychosocial stress and hair cortisol in women. Interestingly, Pruessner et al. [46] showed that chronic stress was associated with decreased cortisol concentration. Perceived stress, particularly chronic stress, can both decrease and increase cortisol concentration [46]. There are several reasons why chronic stress could be associated with elevated and decreased cortisol levels, such as cortisol depletion, lack of free cortisol, impaired cortisol secretion regulating hormones (ACTH, CRH), or modulated glucocorticoid receptor sensitivity [47]. The stress response involves not only the HPA axis but also HPG and HPT axes. CRH from the HPA axis inhibits HPG and HPT axes [18]. Chronic stress and higher cortisol levels are associated with fertility disorders in females, both in humans and animals, such as premature ovarian failure, which is linked with increased concentrations of FSH and decreased concentrations of LH, estradiol and testosterone [48][49][50]. We observed the opposite trend in probands with higher PSS score, who had lower cortisol and higher LH concentrations, compared to probands with lower PSS score. The study of Breen and Mellon [51] pointed to the inverse relationship between cortisol and LH. Higher cortisol levels directly inhibit pituitary gonadotropin levels, so we hypothesize that LH could not be inhibited in probands with higher PSS score due to lower cortisol concentrations compared to probands with lower PSS score. Our hypothesis could be confirmed by a study showing that high levels of gonadotropins were observed in subjects diagnosed with decreased cortisol levels without hormonal replacement therapy [52].
We assume that xenobiotic-induced stress represented by phthalate exposure and psychosocial stress share a similar target which is hormonal balance. Several plausible mechanisms of action of phthalates and psychosocial stress can be suggested. One of them is the modulation of the synthesis and metabolism of hormones, leading to changes in HPA, HPG, and HPT feedback loops [15,53]. We have shown that the joint effect of psychosocial stress and phthalate metabolites is associated with the modulation of LH. Interestingly, we observed a more significant effect of phthalates and PSS score in the interaction models compared to their separate main effects on LH concentrations. Several systematic and literature reviews have followed a similar pattern. Psychosocial and xenobiotic stress cause a more significant effect on health outcomes (e.g., birth weight, neurological parameters, obesity, respiratory diseases) compared to their individual effects [54][55][56][57]. A possible explanation for this synergism is that psychosocial stress increases the sensitivity of the organism to xenobiotics [58].
Although no study has examined the relationship between phthalate exposure, psychosocial stress, and hormonal concentrations during pregnancy, some studies lacking hormonal data have observed the effects of prenatal phthalate exposure and maternal stress on pregnancy outcomes and neonatal health. According to Ferguson et al. [32], exposure to stressful life events (SLEs) increased the significance of the association between exposure to DEHP during the third trimester of pregnancy and preterm birth (n = 783) in the TIDES cohort. Moreover, in the TIDES cohort, exposure to SLEs during the first trimester of pregnancy (n = 738) was associated with non-significant positive relations between phthalate exposure and reproductive biomarkers (e.g., anogenital distance, anoscrotal distance, anopenile distance) in male newborns. On the contrary, in the group of pregnant women with no exposure to SLEs, significant negative associations between reproductive biomarkers and phthalate exposure were observed in male newborns [31]. The opposite pattern was observed in the MIREC cohort [30], wherein a positive association between phthalate metabolites with androgen-disrupting activity and anopenile distance was noted in the lower-stressor group in male newborns (n = 147). Interestingly, in the MIREC cohort, there was a significant positive association between phthalate metabolites with androgen-disrupting activity and reproductive biomarkers in female newborns (n = 153), but only in the higher-stressor group [30]. Pregnancy and newborn outcomes, such as birth timing or reproductive biomarkers, are also associated with prenatal hormonal exposure [31]. The maternal and fetal endocrine system strictly regulates prenatal development. Therefore, any modulation of hormonal concentrations during pregnancy can potentially lead to other adverse outcomes [1].
The current study has a cross-sectional design in which exposure and outcome are assessed simultaneously; it therefore only allows hypotheses to be formulated and cannot establish causal relationships. Subsequent case-control or prospective cohort studies will be needed to validate our hypotheses and results. The next limitation of our study is the size of the cohort. Therefore, verifying our findings in a larger cohort of pregnant women is necessary. On the other hand, the main conclusions, which have crucial public health significance, are supported by convincing statistics and methods for stress assessment. The strength of using Cohen's perceived stress test in our study is the interview approach by one trained technician, who explained the items and questions that the subject might have otherwise misunderstood. Future research would benefit from including additional measures, such as physiological assessments, when assessing perceived stress. Additionally, using self-reported data introduces several limitations, such as response bias. Collecting only one urine sample to determine the concentration of phthalate metabolites during early pregnancy can also be considered a limitation of our study. However, some studies report that the concentrations of phthalate metabolites in repeated urine samples from a single proband were approximately in the same range [59]. It has also been confirmed that there was no significant difference in urinary phthalate metabolite concentrations in spot, morning void, or 24-h or 48-h pooled urine samples [60,61].
Conclusions
We monitored the hormonal concentrations of pregnant women during the early stage of pregnancy in association with phthalate metabolites and perceived stress. Our results showed that OH-MiNP and ΣHMWP were positively associated with estradiol. Cx-MiNP and ΣDiNP were negatively associated with LH. PSS score was not significantly associated with hormonal concentrations. When the interaction effects of PSS score and MEP, cx-MiNP, ΣDiNP, and ΣHMWP on LH were analyzed, the associations were positive. We are the first to show that the joint effect of psychosocial stress and phthalate exposure in pregnant women is associated with a more significant modulation of the hormonal levels compared to the separate effects of phthalate metabolites and stress. During pregnancy, maternal hormonal balance is important for proper prenatal development [3]. Therefore, any modulation of hormonal balance (increase but also decrease in hormone concentration) due to exogenous factors can induce changes in maternal health and the health of future offspring [1]. Understanding the mechanisms by which the interaction between prenatal psychosocial stress and xenobiotic-induced stress may affect the endocrine system needs further study.
Bond Behavior of Concrete-Filled Steel Tube Mega Columns with Different Connectors
Concrete-filled steel tubes (CFSTs) are widely used in construction. To achieve composite action and take full advantage of the two materials, strain continuity at the steel–concrete interface is essential. When the concrete core and steel tube are not loaded simultaneously in regions such as beam or brace connections to the steel tubes of a CFST column, the steel–concrete bond plays a crucial role in load transfer. This study uses a validated finite-element model to investigate the bond-slip behavior between the steel tube and concrete in square CFST mega columns through a push-out analysis of eleven 1.2- × 1.2-m mega columns. The bond-slip behavior of CFST mega columns with and without mechanical connectors, including shear studs, rib plates, and connecting plates, is studied. The finite-element results indicate that the mechanical connectors substantially increased the maximum bond stress. Among the analyzed CFST mega columns, those with closely spaced shear studs and rib plate connectors with circular holes exhibited the highest bond stress, followed by plate connectors and widely spaced shear stud connectors. In the case of shear stud connectors, the stud diameter and spacing influenced the bond behavior more than the stud length. As the stud spacing decreased, the failure mode shifted from studs shearing off to outward buckling of the steel tube.
Introduction
Concrete-filled steel tubes (CFSTs) are widely used as columns. Construction with CFST columns begins with the erection of hollow-steel-tube columns and framing beams and braces, followed by concrete filling as construction progresses. Thus, the need for formwork is eliminated. In addition, CFST columns offer high strength, fire resistance, ductility, and high energy-dissipation capacity [1]. A steel tube enhances the strength and ductility of infilled concrete by reinforcing and confining it. Simultaneously, concrete prevents buckling of the steel tube and increases the overall stability [1,2]. Test results have shown that the strength and ductility of CFST columns are superior to those of individual components, and the ultimate strength is even greater than the sum of the ultimate strengths of the individual materials [1].
Stress transfer and strain continuity between steel and concrete are required to achieve structural benefits and attain composite action [2,3]. Strain continuity and composite action can be guaranteed when the concrete and steel tube are simultaneously loaded [4]. However, when structural members, such as beams and braces, are attached to a steel tube, sufficient steel-concrete bond stress is required to ensure force transfer and strain continuity [2].
The bond behavior of CFST columns has been widely studied [5][6][7][8][9][10]. The push-out test introduced by Dowling [11] is mainly used to study the bond behavior of CFST columns. Numerous push-out tests indicate that the bond stress behavior depends on the interface type, concrete grade, and the shape and size of the cross-section, and that the steel-concrete bond arises primarily from the chemical adhesion of the cement gel, mechanical connectors at the interface, and frictional force [12,13].
Although numerous CFST bond behavior studies have been conducted, most previous push-out tests have been conducted on small cross-sections [13] and may not represent the CFST mega columns typically used in high-rise construction. Figure 1 shows a typical superstructure system incorporating mega columns; the cross-section of mega columns typically exceeds 1 m. In this study, the bond stress behavior of 1.2 × 1.2 m square CFST mega columns was explored through push-out analysis using a validated finite-element model, and a comparison with current practice codes was made.
Parametric Finite-Element Analysis
The parametric variables cover a comprehensive set of connector types: shear stud connectors, rib plate connectors, rib plate connectors with circular holes, rib plate connectors combined with shear studs, and rib plate connectors in combination with connecting plates. To obtain a realistic and representative mega CFST column size and material grades, the adopted mega CFST column was extracted from a skyscraper project. The steel tube in the adopted CFST mega column was 1200 × 1200 mm and had a wall thickness of 25 mm. Moreover, the steel tube and connecting ribs were fabricated using SM460 steel with a nominal yield and ultimate strength of 460 and 570 MPa, respectively. The steel grade used for all other steel components was SM355 with a specified minimum yield and ultimate strength of 355 and 470 MPa, respectively. The shear studs used as connectors were HS1 studs with a yield and ultimate strength of 235 and 400 MPa, respectively.
The adopted connector types and the variations of the investigated parameters are summarized in Figures 2-6 and Tables 1-3. The connector types are grouped into five, and the first group includes shear stud connectors. Figures 2 and 3 depict a CFST mega column with shear stud connectors on four faces and on two parallel faces, respectively. The varied parameters for the stud connectors include the stud spacing, stud diameter, and stud length. The adopted variations are summarized in Table 1, with the model names indicating the stud arrangement and the values of the investigated parameters. S4 and S2 at the beginning of the model names indicate shear studs on all four faces and on two parallel faces, respectively. The numbers following Sp, D, and L in the model names represent the stud spacing, stud diameter, and stud length in millimeters, respectively. For example, S4-Sp300D19L100 denotes studs on all four faces at a spacing of 300 mm, with each stud having a diameter of 19 mm and a length of 100 mm.
Similarly, the CFST columns with rib plate connectors are shown in Figure 4, and the values of the investigated parameters are listed in Table 2. The names of CFST columns with rib plate connectors start with an R, followed by the designation of the hole diameter provided on the ribs, as illustrated in Figure 3. The circular holes in all rib plates are spaced at 300 mm, with the first hole starting 150 mm from the top. A rib plate connector with no holes is also included in this category and designated as having a zero hole diameter (R-HD0).
The final group of CFST mega columns analyzed incorporates a combination of stud connectors, rib plates, and connecting plates, as illustrated in Figures 5 and 6. The geometric details of these models are summarized in Table 3, with the start of the model names indicating the combination of connectors used. The model S4-R represents a CFST mega column with shear stud connectors on four faces together with rib connectors. Likewise, the model S4-R-Cp indicates that studs on four faces, rib plates, and connecting plates were used in combination as a connector. The stud spacing, diameter, and length in these two models were 300, 19, and 150 mm, respectively.
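The model-naming scheme described above can be decoded mechanically. The helper below is not part of the study; the function name and the returned dictionary keys are our own, but the format it parses (S4/S2, Sp, D, L fields in millimeters) follows the text.

```python
# Decode a stud-connector model name of the form S{2|4}-Sp###D##L###,
# e.g., "S4-Sp300D19L100": studs on all four faces, 300-mm spacing,
# 19-mm stud diameter, 100-mm stud length.
import re

def parse_model_name(name: str) -> dict:
    """Parse a stud-connector model name into its geometric parameters."""
    m = re.fullmatch(
        r"S(?P<faces>[24])-Sp(?P<spacing>\d+)D(?P<diameter>\d+)L(?P<length>\d+)",
        name,
    )
    if m is None:
        raise ValueError(f"unrecognized model name: {name}")
    return {
        "stud_faces": int(m["faces"]),    # 4 = all faces, 2 = two parallel faces
        "spacing_mm": int(m["spacing"]),
        "diameter_mm": int(m["diameter"]),
        "length_mm": int(m["length"]),
    }

print(parse_model_name("S4-Sp300D19L100"))
```

Names from the other groups (e.g., R-HD0, S4-R, S4-R-Cp) follow different patterns and would need their own rules.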
Finite-Element Modeling
Finite-element models of the mega CFST columns were formulated using ABAQUS version 2017 [14]. The geometric and material nonlinearities were considered in this analysis. Push-out analysis was conducted by restraining the three translational degrees of freedom of the steel tube base and applying a displacement-control load at the top surface of the concrete, as shown in Figure 7. Since only the steel tube at the bottom is supported, and the load is applied to the top surface of the concrete, the applied load is transferred from the concrete to the steel tube via bond stress at the steel-concrete interface.
The uniaxial compressive stress-strain relationship of the confined modeled using the Drucker-Prager hardening rule by utilizing the ma model for confined concrete [15]. The adopted constitutive model is illu 8. The unconfined concrete stress-strain relation along with the compre and the corresponding strain are shown in red. Here, was take analysis based on the ACI 318 [16] recommendation. When concrete is fining pressure, the compressive strength and the corresponding str than those of unconfined concrete, as illustrated in Figure 8 [15]. The and the corresponding strain are related by Equations (1) and (2), respe The concrete and studs were meshed using reduced integration eight-node linear brick elements with reduced integration and hourglass control, i.e., C3D8R. The steel components were meshed using reduced integration 20-node quadratic brick elements to capture geometric nonlinearities better. Hexagonal elements with an average size of 30 mm were used to mesh the finite-element models. The finite-element model mesh is shown in Figure 7.
The material property of the steel components was modeled as bilinear kinematic hardening using the respective yield and ultimate strength of the steel components. The ultimate strain was assumed to be 0.2. For the elastic range, an elastic modulus and Poisson's ratio of 200 GPa and 0.3, respectively, were adopted.
The uniaxial compressive stress-strain relationship of the confined C70 concrete was modeled using the Drucker-Prager hardening rule by utilizing the material constitutive model for confined concrete [15]. The adopted constitutive model is illustrated in Figure 8. The unconfined concrete stress-strain relation, along with the compressive strength f_c and the corresponding strain ε_c, is shown in red. Here, ε_c was taken as 0.003 in the analysis based on the ACI 318 [16] recommendation. When concrete is subjected to confining pressure, the compressive strength f_cc and the corresponding strain ε_cc are higher than those of unconfined concrete, as illustrated in Figure 8 [15]. The confined strength and the corresponding strain are related to the confining pressure by Equations (1) and (2), respectively [17]:

f_cc = f_c + k_1 f_1 (1)

ε_cc = ε_c (1 + k_2 f_1/f_c) (2)
where k 1 and k 2 are constants determined experimentally [15]. The constants k 1 and k 2 were set as 4.1 and 20.5, respectively, based on the study by Richart et al. [18]. Here, f 1 denotes the confining pressure, which was taken as zero for the size and shape of the mega column analyzed based on the empirical formulation from a previous study [15]. In other words, the compressive strength and corresponding strain did not increase owing to confinement.
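As a numerical sketch (not from the paper), the confined-concrete quantities of Equations (1), (2), and (5) can be evaluated directly. The Richart-type forms f_cc = f_c + k_1·f_1 and ε_cc = ε_c·(1 + k_2·f_1/f_c) are assumed here, consistent with the constants k_1 = 4.1 and k_2 = 20.5 cited in the text.

```python
# Confined concrete properties per Equations (1), (2), and (5), assuming the
# standard Richart-type relations for the confined strength and strain.
import math

K1, K2 = 4.1, 20.5  # constants from Richart et al. [18]

def confined_properties(f_c: float, eps_c: float, f_l: float):
    """Return (f_cc [MPa], eps_cc [-], E_c [MPa]) for confining pressure f_l [MPa]."""
    f_cc = f_c + K1 * f_l                    # Eq. (1): confined strength
    eps_cc = eps_c * (1.0 + K2 * f_l / f_c)  # Eq. (2): confined peak strain
    e_c = 4700.0 * math.sqrt(f_cc)           # Eq. (5): ACI 318-type elastic modulus
    return f_cc, eps_cc, e_c

# For the analyzed mega column the confining pressure was taken as zero,
# so the confined and unconfined values coincide:
print(confined_properties(70.0, 0.003, 0.0))
```

For f_l = 0, as in the analyzed column, only the strength-degradation parameter k_3 distinguishes the confined curve from the unconfined one.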
Only the strength degradation, as illustrated in Figure 8, was altered due to confinement. The material degradation parameter k 3 depends on the shape and width-to-thickness ratio of the confining steel tube, and it was taken as 0.49 based on the empirical formulation by Hu et al. [15].
E_c = 4700 √f_cc (5)
The uniaxial compressive stress-strain relationship of the confine modeled using the Drucker-Prager hardening rule by utilizing the m model for confined concrete [15]. The adopted constitutive model is il 8. The unconfined concrete stress-strain relation along with the comp and the corresponding strain are shown in red. Here, was tak analysis based on the ACI 318 [16] recommendation. When concrete i fining pressure, the compressive strength and the corresponding s than those of unconfined concrete, as illustrated in Figure 8 [15]. The and the corresponding strain are related by Equations (1) and (2), resp When plastic deformation occurs, certain parameters should dictate the yield surface's expansion. Therefore, once the confined compressive strength ( f cc ) and corresponding strain (ε cc ) were determined, the uniaxial compressive stress-strain relationship was formulated using Equations (3)-(5) [15], where f c , ε c , and E c represent the uniaxial compressive stress, strain, and elastic modulus, respectively. R E in Equation (4) represent the ratio of the initial modulus to the secant modulus at f cc . Equation (5), adapted from ACI 318 [16], was used to calculate the initial elastic modulus. The constants R σ and R ε are parameters dependent on the descending branch of the stress-strain curve and highly test dependent. In this study, R σ and R ε were taken as 4 based on Hu et al. [19].
The steel-concrete contact, i.e., the contact between the concrete and the inside walls of the steel tubes and ribs, was modeled as a hard contact with no penetration in the normal direction. The contact behavior in the tangential direction was modeled with surface-based cohesive elements defined by a traction-separation relation on the tube-concrete interface with a damage mechanism [14,20]. In this study, the stiffness of the cohesive elements in the two tangential directions was assumed to be uncoupled and equal. Moreover, a cohesive element stiffness of 0.55 MPa/mm was determined to reflect the initial stiffness of the push-out test result by Tao et al. [13]. The damage initiation criterion was defined by limiting the maximum slip before decohesion commences, and when the damage initiation criterion is met, the cohesive element is degraded. The maximum slip before decohesion was determined to be 0.072 mm based on the test results of Tao et al. [13]. Following the decohesion, the tangential contact property was defined using a penalty friction formulation with a coefficient of friction of 0.25.
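To make the tangential interface law concrete, the sketch below evaluates a piecewise bond stress-slip curve using the cohesive stiffness (0.55 MPa/mm) and the damage-initiation slip (0.072 mm) quoted above. The post-damage softening slope and the frictional residual stress are illustrative assumptions, not values from the study; in the actual model, ABAQUS handles the degradation and penalty friction internally.

```python
# Illustrative bond stress-slip law for the tube-concrete interface:
# elastic cohesive branch up to damage initiation, then an assumed linear
# softening toward an assumed frictional residual (both hypothetical).
K = 0.55        # cohesive stiffness, MPa/mm (from the push-out calibration)
S0 = 0.072      # slip at damage initiation, mm (from the push-out calibration)
TAU_RES = 0.02  # frictional residual stress, MPa (assumed for illustration)
S_RES = 0.5     # slip at which the residual is reached, mm (assumed)

def bond_stress(slip_mm: float) -> float:
    """Tangential bond stress (MPa) for a given interface slip (mm)."""
    tau0 = K * S0  # peak cohesive stress at damage initiation
    if slip_mm <= S0:
        return K * slip_mm  # intact cohesive (elastic) branch
    if slip_mm >= S_RES:
        return TAU_RES      # fully degraded: sliding friction only
    # linear degradation from the peak down to the frictional residual
    frac = (slip_mm - S0) / (S_RES - S0)
    return tau0 + frac * (TAU_RES - tau0)
```

The same two calibrated numbers fix the peak cohesive stress of this sketch at K·S0; everything after the peak depends on the assumed softening shape.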
The interaction of the concrete with the stud connectors and connecting plates was modeled using the ABAQUS embedded region constraint [14], with the concrete as a host region and the studs and connecting plates as an embedded region to make force transfer possible.
CFST column push-out tests conducted by Tao et al. [13] showed that CFST columns with shear studs failed because the studs sheared off at the stud-tube weld while the weld remained intact. To incorporate this phenomenon in the finite-element model, the stud-steel-tube weld was modeled as a cohesive element with a 650 MPa/mm stiffness, and decohesion initiates after a 1.35-mm slip. The cohesive element stiffness and decohesion slip were calibrated to match the experimental observations by Tao et al. [13].
Validation of Finite-Element Model
The accuracy of the finite-element modeling assumptions for the CFST mega column was validated using the experimental results obtained by Tao et al. [13]. In Figure 9, the finite-element prediction and test results of a CFST column without a connector (nominal interface) and with shear stud connectors are compared. As shown in Figure 9a, the adopted cohesive element formulation reflects the bond stress-slip relation when mechanical connectors are not used. Moreover, as shown in Figure 9b, the finite-element model reflects the stud shearing-off phenomenon observed in the tests, and the bond stress-slip relation agrees well with the test result until the maximum bond stress develops.
Figure 9. Comparison of test and finite-element method results: (a) square CFST without stud connectors and (b) square CFST with stud connectors.
Discussion of Results
The bond stress-slip relations obtained from the finite-element analysis and the observed failure modes are illustrated in Figures 10-16. Here, the bond stress is calculated as the ratio of the applied load to the steel tube-concrete contact area, and the slip is calculated as the vertical displacement of the concrete relative to the steel tube. As the graph in Figure 10 reveals, the S2-Sp100D19L150 specimen developed the highest bond stress of all the CFST columns analyzed. The S2-Sp100D19L150 model failed because of outward buckling of the steel tube accompanied by shear stud failure around the buckled region, as shown in Figure 11. Compared with the steel tube face with studs, the stress and buckling deformation were more pronounced on the steel tube face without stud connectors. In contrast, the stress distribution was uniform in the S4 models, which have studs on all faces of the steel tube.
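The bond stress and slip definitions above can be sketched as follows; the tube geometry and load in the example are hypothetical, not values from the study:

```python
def average_bond_stress(load_kN: float, perimeter_mm: float,
                        contact_length_mm: float) -> float:
    """Average bond stress (MPa) defined as the applied load divided by the
    steel tube-concrete contact area (perimeter x contact length)."""
    contact_area_mm2 = perimeter_mm * contact_length_mm
    return load_kN * 1000.0 / contact_area_mm2  # kN -> N, N/mm^2 = MPa

# hypothetical 1.2 m square tube with a 2.0 m contact length under 5000 kN
tau = average_bond_stress(5000.0, 4 * 1200.0, 2000.0)
print(round(tau, 3))  # -> 0.521
```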
All the S4 models analyzed failed by shear-stud-steel-plate connection failure, consistent with the experimental observations of Hu et al. [15]. In the S4 models with the same stud spacing and stud diameter, varying the stud length from 100 to 200 mm did not affect the bond stress-slip behavior. In contrast, increasing the stud diameter by 31.5% while keeping the spacing and length constant resulted in a 70.5% increase in the maximum bond strength. As shown in Figure 10, the maximum bond stresses achieved by the S4 models with 19- and 25-mm studs were 24% and 41%, respectively, of that of the S2 model. The S2 model showed a higher maximum bond stress than the S4 models because it contained far more studs: forty-eight studs were used in each of the S4 models, whereas 308 studs were used in the S2 model. The S2 specimen failed by steel tube outward buckling and showed considerable strength and stiffness beyond the proportionality limit. In the S4 models, with their fewer studs, the bond stress was lost once the maximum bond stress was reached. These results are consistent with the experimental observations [15].

Next to the S2 model, the rib plate connectors with circular holes (R models) exhibited the highest bond stress. The two R models with circular holes displayed distinct bond stress-slip relationships depending on the hole diameter; nonetheless, both achieved a maximum bond stress of 68% that of S2, as illustrated in Figure 10. The bond stress in the R-HD75 model started degrading after attaining the maximum bond stress because of plastic deformation of the rib plate and concrete damage around the circular holes, as shown in Figure 13. Conversely, no such bond stress degradation was observed in the R-HD125 model up to a 30-mm slip, the maximum slip considered in the analysis.

Incorporating shear studs together with rib plates, as in S4-R, gave a maximum bond stress comparable to the S4 models, reaching 21% that of the S2 model, as shown in Figure 10. Although the maximum bond stresses are comparable, the bond stress-slip relationship and the stress level in the steel tube differ: the initial stiffness of the bond stress-slip curve was lower than that of the S4 models, and the steel tube stress in S4-R was higher than in the S4 models. Similar to the S4 models, the S4-R model failed because of the shear studs shearing off the steel tube and rib plates.

Combining connecting plates with rib plates and shear studs, as in S4-R-Cp, improved the bond stress performance beyond the proportionality limit, with no bond stress degradation up to a 30-mm slip. The maximum bond stress achieved was 66% of that of the S2 model. As shown in Figure 10, the maximum bond stress reached by the S4-R-Cp model is comparable to that of the R models with circular holes. However, S4-R-Cp reached its proportionality limit at a bond stress that was 45.3% that of the R-HD125 model.

Finally, the R-HD0 model and the model without connectors exhibited the lowest bond stresses, as illustrated in Figure 10. The bond stress in these two models relied solely on the cohesion between the concrete and the steel tube/rib plates; as a result, it was lost when decohesion occurred, as shown in Figure 16. The addition of rib plates increased the steel-concrete contact area, but the resulting bond strength improvement was negligible.

Table 4 summarizes the key parameters obtained from the analysis and the failure modes. The efficiency of each model was examined by comparing the maximum bond stress with the volume of connectors used. As shown in Table 4, the S4 models with a 19-mm stud diameter exhibited similar maximum bond stresses despite the variation in stud length. On the contrary, increasing the cross-sectional stud area by a factor of 1.73 while keeping the stud length and spacing constant increased the maximum bond stress by a factor of 1.72. The most efficient connector type, with the highest bond stress per unit volume of connector, was the stud connector distributed on four faces (S4), followed by stud connectors on two parallel faces (S2), rib plate connectors with circular holes, the connecting plate, the rib plate with studs (S4-R), and the rib plate without holes, in that order.
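The reported diameter and area ratios can be checked arithmetically; this sketch only verifies the internal consistency of the figures quoted in the text:

```python
# Consistency check of the reported stud-diameter scaling
# (the 19- and 25-mm diameters are the analyzed stud sizes).
d19, d25 = 19.0, 25.0

area_ratio = (d25 / d19) ** 2        # cross-sectional area scales with diameter squared
diameter_increase = d25 / d19 - 1.0  # relative diameter increase

print(round(area_ratio, 2))          # -> 1.73, matching the reported 1.73x area increase
print(round(diameter_increase, 3))   # -> 0.316, matching the reported 31.5% increase
```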
The maximum bond strengths obtained from the analysis were compared to the requirements of current practice codes. The minimum bond strength requirement of the Chinese code DBJ/T 13-54-2010 [21] and Japanese code AIJ [22] is 0.225 MPa. The British code BS 5400-5 [23] and European code EN1994 [24] require a minimum bond strength of 0.4 and 0.55 MPa, respectively. Thus, all the analyzed mega columns with mechanical connectors satisfied the requirements of the four codes [21][22][23][24], whereas the two models that relied on cohesion failed to fulfill the requirements.
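The comparison against the code minimums can be sketched as a small lookup; the helper name is illustrative, and the limits are those quoted in the text [21-24]:

```python
# Minimum bond strength requirements (MPa) cited in the text.
CODE_MINIMUMS_MPA = {
    "DBJ/T 13-54-2010": 0.225,  # Chinese code [21]
    "AIJ": 0.225,               # Japanese code [22]
    "BS 5400-5": 0.40,          # British code [23]
    "EN 1994": 0.55,            # European code [24]
}

def codes_satisfied(bond_strength_mpa: float) -> dict:
    """Return, per design code, whether a bond strength meets the minimum."""
    return {code: bond_strength_mpa >= limit
            for code, limit in CODE_MINIMUMS_MPA.items()}
```

For example, a hypothetical 0.45 MPa bond strength would satisfy DBJ/T, AIJ, and BS 5400-5 but fall short of the EN 1994 requirement.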
Conclusions
The bond strength between the steel tube and concrete in concrete-filled steel tube (CFST) mega columns was investigated using a validated finite-element model. The finite-element investigation included 11 identical 1.2 × 1.2 m CFST mega columns with different types and arrangements of steel-concrete connectors. The following conclusions can be drawn from the finite-element results.
1. The maximum bond stress achieved by the CFST mega column models with mechanical connectors satisfies the minimum requirements of DBJ/T, AIJ, BS 5400-5, and EN 1994, while the minimum requirement of the four codes was not met by the mega columns that relied on cohesion only.
3. Although S2-Sp100D19L150 exhibited a high bond stress, nonlinearity started early compared with the other models that exhibited high bond stress.
4. The rib plate connectors with circular holes exhibited both a high maximum bond stress and a high bond stress before losing linearity.
5. Increasing the stud length had a negligible effect on the bond performance for the same number of studs. However, increasing the stud diameter resulted in improved bond performance.
6. The use of closely spaced studs, rib plates with circular holes, and connecting plates that run between the parallel walls of the CFST resulted in considerable slip before the strength degradation commenced. Moreover, the bond stiffnesses of these three connector types were of the same order.
7. Increasing the circular hole diameter from 75 to 125 mm in the CFST columns with rib connectors improved the bond strength, stiffness, and maximum bond stress before the proportionality limit.
8. The CFST columns that relied solely on the cohesion between steel and concrete (the CFST without connectors and R-HD0) showed the poorest performance.
9. The stud connectors, followed by the rib plate connectors with circular holes, were the most efficient with respect to maximum bond stress per unit connector volume.
|
v3-fos-license
|
2019-03-16T13:13:59.889Z
|
2017-02-28T00:00:00.000
|
78820227
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.4172/2157-2518.1000285",
"pdf_hash": "3a05b3bbfec1df7dca7b95e152d2ca322bd530c2",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:791",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "9989e140ec03aef5a4e4e2e0a6f8470e7e453997",
"year": 2017
}
|
pes2o/s2orc
|
Pulmonary Hypertension in Children with Cancer
Background: Advances in medicine have led to improved survival rates for children diagnosed with cancer. Despite these improvements, late mortality rates for cancer survivors exceed those of the general population. Leading causes of death in this population include subsequent cancer, followed by pulmonary and cardiovascular events. Objective: To study the frequency of pulmonary hypertension in children with cancer after completion of their treatment, and to study the effects of selected factors, such as age at diagnosis and type of treatment, on the development of pulmonary hypertension. Patients and methods: A cross-sectional study of the frequency of pulmonary hypertension in patients with cancer who had finished their treatment was carried out at the pediatric oncology center of Basra Children's Specialty Hospital over 6 months, from the 1st of October 2014 to the 31st of March 2015. A total of 67 patients were included in the study; their ages ranged from 6 months to 16 years, with 41 males and 26 females. The patients were evaluated for the development of pulmonary hypertension by echocardiography in the same hospital. Results: Acute lymphoblastic leukemia accounted for the greatest percentage of cases (34.3%), followed by acute myeloid leukemia (15%) and Hodgkin's lymphoma (13.4%); the rest were solid tumors (37.3%). Pulmonary hypertension was not significantly related to the type of cancer (P=0.729). The age of the patient at the time of diagnosis significantly affected the development of pulmonary hypertension, which tended to occur more often in patients diagnosed before the age of five years than in those diagnosed later (P=0.035), whereas the sex of the patient had no statistically significant effect (P=0.773). There was no significant relation to treatment with chemotherapy (methotrexate), while the relation to radiotherapy was statistically significant (P=0.04).
The occurrence of pulmonary hypertension was also affected by the time elapsed after treatment: cardiovascular complications were seen more often in patients who had completed treatment more than two years earlier, with a statistically significant association (P=0.036). Pulmonary hypertension occurred more often in patients exposed to radiation of the chest, cervical, and brain areas (above the diaphragm) than in those irradiated below the diaphragm, but the difference was not statistically significant (P=0.264). The route of administration of chemotherapy (methotrexate), oral or intravenous, had no statistically significant effect on the occurrence of pulmonary hypertension (P=0.432). Conclusion: Pulmonary hypertension is one of the adverse cardiovascular effects that develop in patients exposed to radiation or certain types of chemotherapy (methotrexate), so these patients should have regular screening of cardiac function after completing the course of therapy.
Introduction
Cancer in children is rare; only about 1% of new cancer cases [1] in the United States occur among children younger than 19 years of age [2]. Although advances in treatment have increased the overall 5-year survival rate for childhood cancers to approximately 80%, cancer is still the second leading cause of death (following accidents) in children aged 5 to 14 years with slightly increased rates in males and white children [3]. Hematopoietic tumors (leukemia, lymphoma) are the most common childhood cancers, followed by central nervous system (CNS) tumors and sarcomas of soft tissue and bone [2].
The risk of pulmonary conditions is more than three times higher in cancer survivors than in their siblings, as manifested by pulmonary signs (abnormal chest wall growth), symptoms (chronic cough, use of supplemental oxygen, exercise-induced shortness of breath), or specific diagnoses (lung fibrosis, recurrent pneumonia, pleurisy, bronchitis, recurrent sinus infection, or tonsillitis) [4]. Pulmonary fibrosis and pneumonitis are the best-described sequelae of cancer treatment during childhood. However, like most late effects of cancer therapy, pulmonary toxicity may first become apparent during treatment and persist, or it may not appear until years later [6]. Certain chemotherapies, such as bleomycin, can cause lung problems (pneumonitis and fibrosis). People who receive these treatments may have no noticeable symptoms, but for others, problems may start within the first few years after treatment. Diffuse alveolar damage (DAD) is a common pathological manifestation of drug-induced lung injury that results from necrosis of type II pneumocytes and alveolar endothelial cells [5]. Histopathologically, DAD is divided into an acute exudative phase and a late proliferative phase. The exudative phase, characterized by alveolar and interstitial edema and hyaline membranes, is most prominent in the first week after lung injury. The proliferative phase, characterized by proliferation of type II pneumocytes and interstitial fibrosis, typically occurs after 1 or 2 weeks. Depending on the severity of the injury, the fibrosis can improve significantly, remain stable, or progress to honeycomb lung. The drugs that most commonly cause DAD are bleomycin, busulfan, carmustine, cyclophosphamide, melphalan, methotrexate, and mitomycin [6].
The lungs are particularly sensitive to radiation, and radiation-related complications such as pulmonary fibrosis and pneumonitis are most often seen in patients with malignant diseases of the chest, notably HL and solid tumors with pulmonary metastases, such as Wilms tumor and Ewing sarcoma. Asymptomatic abnormal radiographic findings or restrictive changes on pulmonary function testing have been reported in more than 30% of patients who received radiation to the lungs. These changes can be detected months to years after completion of therapy and are most prevalent in individuals with a history of pneumonitis as an acute complication of therapy [6].
Methods
A cross-sectional study was carried out on children with malignancy (hematological and solid tumors) from the 1st of October 2014 to the 31st of March 2015 who had finished their treatment at the Pediatric Oncology Center of Basrah Children's Specialty Hospital. A total of 67 patients (41 males and 26 females), aged 6 months to 16 years, were included in the study.
Many drugs have a tendency to cause pulmonary hypertension; of these, two are available and commonly used in the Pediatric Oncology Center in Basrah: bleomycin and methotrexate (MTX). Bleomycin was excluded from the study because it is used only in the treatment of GCT and HL in pediatric age groups, and its pulmonary toxicity is dose-dependent, occurring at treatment doses greater than 400 units [7], which pediatric doses do not reach. MTX was included in the study because its tendency to produce lung toxicity is not dose-related [8] and it is used in pediatric age groups in the treatment of many cancers, including NHL (IV MTX), APL (oral MTX), and ALL (IV and oral MTX). A special data sheet was designed for the purpose of the study (Appendix I). The following information was recorded: name, sex, date of birth, past medical history, date of diagnosis, date of starting treatment, date of finishing treatment, date of evaluation, type of tumor, exposure to radiotherapy, route of chemotherapy (MTX), and echocardiographic findings. Patients enrolled in the study had finished their treatment course, were aged 6 months to 16 years, and had adequate information about their treatment course. Patients previously diagnosed with pulmonary hypertension or suffering from conditions with a tendency to cause pulmonary hypertension were excluded from the study. These conditions included congenital heart disease, valvular heart disease, cardiomyopathies, sickle cell disease or anemia, thalassemia, lung disease, splenectomy, chronic renal failure, and thyroid disease.
Patients were grouped as follows:
1. Patients who received methotrexate in their treatment, subdivided into three groups:
a. Patients exposed to oral methotrexate
b. Patients exposed to intravenous methotrexate
c. Patients exposed to both oral and intravenous methotrexate
2. Patients who received radiotherapy, subdivided into two groups:
a. Patients who received total body irradiation or irradiation to the area above the diaphragm
b. Patients who received irradiation to the area below the diaphragm
Because the radiotherapy data for each patient were deficient in terms of exact cumulative and fractionated doses, the data were limited to the therapeutic irradiation doses for each type of cancer and the area of exposure, as mentioned above. The diagnosis of pulmonary hypertension was based on examination with an echocardiography device (PHILIPS, made in Saronno, Italy, 2003, Model No: MCMD02AA) in the same hospital; a patient was considered to have pulmonary hypertension when the mean pulmonary arterial pressure was more than 25 mmHg [9].
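The diagnostic threshold used in the study can be expressed as a one-line classifier; a minimal sketch:

```python
def has_pulmonary_hypertension(mean_pap_mmhg: float) -> bool:
    """Classification used in the study: a mean pulmonary arterial
    pressure above 25 mmHg indicates pulmonary hypertension [9]."""
    return mean_pap_mmhg > 25.0
```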
Statistical analysis was done using the Statistical Package for the Social Sciences (SPSS, version 18). Comparisons of proportions were performed with cross-tabulations using the chi-square test when each cell had an expected frequency of five or more, and Fisher's exact test when any cell had an expected frequency of less than five. A P-value of <0.05 was considered statistically significant.
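The test-selection rule described above can be sketched as follows; the counts in the example are illustrative, not the study's data (the study itself used SPSS):

```python
def choose_test(table_2x2):
    """Select the test as described in the text: chi-square when every
    expected cell frequency under independence is >= 5, otherwise
    Fisher's exact test."""
    (a, b), (c, d) = table_2x2
    n = a + b + c + d
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    # expected frequency of each cell = row total * column total / grand total
    expected = [rows[i] * cols[j] / n for i in range(2) for j in range(2)]
    return "chi-square" if all(e >= 5 for e in expected) else "Fisher's exact"

# e.g. a sparse hypothetical 2x2 table (PH vs no PH by age group)
print(choose_test([[5, 25], [1, 36]]))  # -> Fisher's exact
```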
Results
A total of 67 cases were included in this study. Acute lymphoblastic leukemia comprised the highest percentage (34.3%) compared with the other hematological malignancies (AML and APL, 15%), while Hodgkin lymphoma and Wilms tumor constituted 13.4% and 9%, respectively. Non-Hodgkin lymphoma and NB each accounted for 7.5%, while MB, RB, and RMS constituted the lowest percentages among the collected cases (4.5% each) (Table 1). Table 2 shows that there were more male than female cases (41 and 26, respectively). In both sexes, the largest percentage of cases was in the age group 6-16 years: 75.6% for males and 88.5% for females. There was no statistically significant difference in the distribution of malignancy between the age groups among males (P>0.05) or females (P>0.05) (Table 2). Table 3 shows that PH was detected in 9% of the 67 evaluated cases, distributed as follows: 3% each for ALL and HL, and 1.5% each for APL and Wilms tumor. There was no statistically significant difference in the distribution of PH cases with respect to the type of malignancy (Table 3). Table 4 shows the distribution of PH according to age at diagnosis: 7.5% of cases had PH in the younger age group (6 months-5 years) and 1.5% in the older age group (6-16 years), a statistically significant difference. The table also shows that there was no statistically significant difference with respect to the sex of the patients (Table 4). Table 5 shows the distribution of PH among patients who received MTX. Of the 9% of cases with PH, 4.5% appeared among patients who received MTX and the remaining 4.5% among those who did not, with no statistically significant difference. The percentage of PH in patients who received radiotherapy was 6%, while 3% represented PH cases not treated with radiotherapy.
There was a statistically significant difference in the distribution of PH cases in relation to radiotherapy (Table 5). Table 6 shows the distribution of PH according to post-treatment duration: all six cases (9%) diagnosed with PH appeared in patients whose post-treatment duration was more than two years, a statistically significant difference. The table also shows the distribution of PH in relation to the duration of oral MTX: 3.7% of patients treated with MTX for 12 months developed PH, compared with 7.4% of patients who received MTX for 30 months, with no statistically significant difference (Table 6). Table 7 shows the distribution of PH among patients who received radiotherapy above and below the diaphragm; the largest percentage of PH appeared among patients who received radiotherapy above the diaphragm (15%), with no statistically significant difference in relation to the area of exposure. The table also shows the distribution of PH among patients who received the same chemotherapy (MTX) by different routes, IV MTX (15.6%), oral MTX (12.5%), and both routes (71.9%): 3.1% of PH cases appeared in patients on oral MTX and 6.3% in patients who received MTX by both routes (oral and IV), with no statistically significant difference in relation to the route of MTX administration (Table 7).
Discussion
It is important to remember that only 50 years ago the ability to diagnose and manage childhood cancer was rudimentary and survival was less than 10%; today, more than 70% of children diagnosed with cancer survive, and the majority are considered cured [10]. Despite this, the late adverse effects of treatment modalities should be considered during and after the treatment course. Pulmonary hypertension is one of the adverse cardiovascular effects that develop in patients exposed to radiation or certain types of chemotherapy (MTX).
The study shows that the percentage of PH among cases was 9%, which is high relative to the occurrence of PH in the general population. This finding is similar to the results of the study carried out by Mertens et al. [4], which showed that the risk of pulmonary conditions was more than three times higher in cancer survivors than in their siblings. The study shows no relation between the type of malignancy and the development of PH (P=0.729), which is statistically not significant. This finding is consistent with the fact that lung metastases, and the malignancies that can cause interstitial lung disease and subsequently lead to PH, are rare in childhood [11].
In relation to age at diagnosis, the study included all pediatric age groups that could theoretically complete the shortest course of cancer therapy, which is 6 months, as in the treatment of RB, MB, and Wilms' tumor. Age at diagnosis was divided into two groups: one including infants, toddlers, and preschool children (6 months-5 years old), and the other including school-age children and adolescents (6-16 years). The study found a statistically significant association between the younger age group and the risk of PH (P=0.035). This finding is similar to the study by Lopez et al. [12] and to other research done by Miller et al. [13]. The well-defined relation between radiation and the younger age group arises because radiation for malignancy results in pulmonary fibrosis with loss of lung volume, as in adults and older children, and additionally compromises pulmonary function by inhibiting growth of the supporting structures and the chest wall in younger children [14]. The study also shows no statistically significant association between sex and the development of PH (P=0.773). This finding corresponds to the results of another study carried out by Rubin [15], which revealed that the development of PH during childhood is not affected by the patient's gender.
In the case of MTX, the toxicity is dose independent but differs with the route of administration. The study shows no statistically significant association between treatment with MTX and the development of PH (P=0.908). This result is probably due to the rarity of PH cases caused by MTX (incidence below 1%) [8] and to the small number of patients who received this treatment. For the same reason of small sample size, the development of PH showed no statistically significant difference between patients who received MTX by different routes of administration (P=0.432), but the difference in percentages was similar to the study carried out by Cottin et al. [16], which showed a relation between oral low-dose MTX and the development of lung toxicity.
Regarding the cases that received radiotherapy in their treatment course, PH appeared more commonly among patients who received radiation than among those who did not. This finding reveals a statistically significant association between exposure to radiotherapy and the development of PH (P=0.04). The result is consistent with research done by Weiner et al. [17] and with another study carried out by Mertens et al. [4], which showed a statistically significant association between lung fibrosis and chest radiation (P=0.001).
The study also estimated the relation between PH and the location of the irradiated area and found that PH occurred more often among patients exposed to radiation above the diaphragm. Despite this obvious difference, the association is not statistically significant. The difference in the percentage of PH cases between the two groups (above and below the diaphragm) corresponds to the results of research done by Liles et al. [18], which revealed that the risk of lung injury increases proportionally with the lung volume exposed to radiation; the non-significant P-value may be due to the small number of patients who received radiotherapy in this study.
The study considered two groups depending on the period after completing the treatment course: the first with a post-treatment duration of 2 years or less and the second with more than 2 years. All six cases diagnosed with PH occurred in patients with more than 2 years of post-treatment duration, a statistically significant association (P=0.036). This finding is similar to the study carried out by Rosiello et al. [19], which showed that the clinical manifestations of pulmonary toxicity induced by anticancer therapy may be delayed up to two years after treatment is completed. This is explained by the histopathological phases that take place in the lung when it is exposed to chemotherapy: the exudative phase, which is most prominent in the first week after lung injury, and the reparative phase, which typically occurs after 1 or 2 weeks. Depending on the severity of the injury, fibrosis can improve significantly, remain stable, or progress to honeycomb lung [6]. The same occurs with radiotherapy, in which patients may develop progressive pulmonary fibrosis, usually 6 to 24 months after treatment [20].
Conclusion
The study compared the duration of oral MTX among three groups of patients: APL with oral MTX for 12 months, ALL in females with oral MTX for 18 months, and ALL in males with oral MTX for 30 months. The results showed no statistically significant association between the duration of oral MTX and the development of PH (P=0.393). These results were similar to another study carried out by Rossi et al., which showed no correlation between the development of drug toxicity and the duration of MTX or its total cumulative dose [21].
Evaluation of normalization methods for two-channel microRNA microarrays
Background MiR arrays distinguish themselves from gene expression arrays by their more limited number of probes, and the shorter and less flexible sequence in probe design. Robust data processing and analysis methods tailored to the unique characteristics of miR arrays are greatly needed. Assumptions underlying commonly used normalization methods for gene expression microarrays containing tens of thousands or more probes may not hold for miR microarrays. Findings from previous studies have sometimes been inconclusive or contradictory. Further studies to determine optimal normalization methods for miR microarrays are needed.

Methods We evaluated many different normalization methods for data generated with a custom-made two channel miR microarray using two data sets that have technical replicates from several different cell lines. The impact of each normalization method was examined on both within miR error variance (between replicate arrays) and between miR variance to determine which normalization methods minimized differences between replicate samples while preserving differences between biologically distinct miRs.

Results Lowess normalization generally did not perform as well as the other methods, and quantile normalization based on an invariant set showed the best performance in many cases unless restricted to a very small invariant set. Global median and global mean methods performed reasonably well in both data sets and have the advantage of computational simplicity.

Conclusions Researchers need to consider carefully which assumptions underlying the different normalization methods appear most reasonable for their experimental setting and possibly consider more than one normalization approach to determine the sensitivity of their results to normalization method used.
Background
MicroRNAs (miRs) are a class of short, highly conserved non-coding RNAs known to play important roles in numerous developmental processes. MiRs regulate gene expression through incomplete base-pairing to a complementary sequence in the 3′ untranslated region (3′ UTR) of a target mRNA, resulting in translational repression and, to a lesser extent, accelerated turnover of the target transcript [1]. Recently, the dysregulation of miRs has been linked to cancer initiation and progression [2], indicating that miRs may play roles as tumor suppressor genes or oncogenes [3]. There is also mounting evidence that miRs are important in development timing [4,5], cell differentiation [6], cell cycle control and apoptosis [7]. The involvement of miRs in those biological functions suggests their intrinsic roles in maintaining homeostasis or contributing to pathological processes.
Technologies utilized for relative quantification of miR expression include Northern blot, real time PCR, in situ hybridization, sequence analysis and array-based profiling [8]. Due to the limited throughput of other technologies, microarray-based miR profiling has become a popular method for interrogation of miRs, especially when the contributions of specific miRs to a given condition or process remain elusive. However, miR arrays distinguish themselves from gene expression arrays by their more limited number of probes, and the shorter and less flexible sequence in probe design. Robust data processing and analysis methods tailored to the unique characteristics of miR arrays are greatly needed.
Normalization is a key early step in miR microarray data processing. Normalization methods are aimed at removing data artifacts resulting from systematic or random technical variation. If not removed, these artifacts might affect subsequent data analyses, such as class comparison and class prediction. Assumptions underlying commonly used normalization methods for gene expression microarrays containing tens of thousands or more probes may not hold for miR microarrays. Further studies to determine optimal normalization methods for miR microarrays are needed. The best normalization method may differ depending on whether the miR chip uses a one-channel or two-channel system. In a one channel system, single samples are labeled and hybridized to individual arrays. For arrays using a two-channel system, generally two samples are separately labeled, mixed, and hybridized together to each array. The most commonly used design for a two-channel system is called the reference design. One of the samples is used as an internal standard so that the signal intensity which reflects the amount of hybridization to a probe for a sample of interest is measured relative to the intensity for the same probe on the same array for the reference sample [9].
Several papers comparing miR microarray normalization methods have been published; however, the results and recommendations are not consistent. Rao et al. [10] compared normalization methods for single channel miR microarray data. They reported that quantile normalization was the best performing method for reducing the differences in microRNA expression values among replicate tissue samples. Pradervand et al. [11] confirmed that quantile normalization was the most robust normalization method for their set of invariant miRs using the Agilent single channel platform. In contrast, Hua et al. [12], using RT-PCR as a gold standard, found that the lowess method gave the best result for two-channel miR microarray data, although the differences among their top performing methods were minimal. However, the suitability of RT-PCR as a comparator for miR microarray expression results has been questioned [8,13], and the stability of lowess smoothers is known to be dependent on the number of data points to which they are applied. Sarkar et al. [14] reported quality assessment for two-channel miR expression arrays, and they found that all normalization methods performed adequately in their study.
Here we report our evaluation of many different normalization methods on a custom-made two channel miR microarray. Our study examined technical replicates from a large number of different cell lines to determine which normalization methods minimized differences between replicate samples while preserving differences between biologically distinct miRs.
Cell line culture
Ten lung carcinoma cell lines from the NCI60 panel were obtained from the National Cancer Institute's Developmental Therapeutics Program (DTP), and 9 renal cell carcinoma cell lines were generated at the Surgery Branch, National Cancer Institute, National Institutes of Health (NIH). All cell lines were cultured in complete RPMI media supplemented with 10% FBS, 1 mM HEPES, 1 mM Ciprofloxacin and L-glutamine/penicillin/streptomycin. All cells were cultured at 37°C under 5% CO2. Cells were harvested at sub-confluent condition by trypsin-versene (Invitrogen) detachment and centrifugation after 3-5 days in culture. A single EBV cell line used as the reference sample was cultured in suspension in the same media and harvested by centrifugation at 1200 rpm for 5 min after one week of culture. Cell pellets were immediately lysed in Trizol at 1-2 × 10^7 cells per ml of Trizol.
RNA isolation and labeling
Total RNA from the 10 lung carcinoma cell lines and 9 renal cell carcinoma cell lines was isolated using Trizol reagent. Small RNAs in the total RNA samples were enriched and purified with the flashPAGE Fractionator (Ambion, Austin, TX, USA) according to the manufacturer's instructions. The reference sample, consisting of one EBV cell line, was processed following identical procedures. After small RNA purification, small RNA from the test samples and the EBV reference samples, equivalent to 10 μg of total RNA, was labeled with Cy5 and Cy3, respectively, using the mirVana™ miRNA Labeling Kit (Ambion, Austin, TX, USA).
Microarray fabrication and quality control procedures
A custom-made oligo array including 714 human, mammalian and viral mature antisense miRs (miRBase: http://microrna.sanger.ac.uk/, version 9.1) plus 2 internal controls with 7 serial dilutions [2,6,15] was printed at the Infectious Disease and Immunogenetics Section, Department of Transfusion Medicine, Clinical Center, NIH. The antisense miR oligo probes were 5′ amine modified and immobilized in duplicate (two spots per miR per array) on CodeLink activated slides (GE Health, NJ, USA) via covalent binding. Serially diluted control probes were used as indicators of labeling efficiency, optimization of intensity saturation, and intensity balance of test vs. reference sample. A single large labeling reaction of the EBV reference sample was used for all arrays. Strong and positive EBV-miR hybridization also functioned as a positive control for quality assessment of the reference sample.
Sample hybridization and image analysis
Equal amounts of labeled test and reference samples were cohybridized on the custom-made miR oligo microarray for more than 14 hours at room temperature. After washing, the array was scanned using a GenePix 4B scanner. Any spot smaller than 25 pixels was filtered out and excluded from the remaining analyses. If both channels produced intensities less than 100 for a given microRNA, that spot was also filtered out. For spots with one channel intensity less than 100 but the other channel intensity 100 or greater, the signal less than 100 was set to 100 prior to calculation of the signal ratio. The intensity ratio for each spot was then calculated as the red signal intensity (test sample) divided by the green channel signal intensity (EBV reference sample). Both single channel intensities and intensity ratios were log transformed (base 2) for normalization and further analyses. Overall, 9 out of 10 lung carcinoma cell lines and all 9 renal cell carcinoma cell lines had duplicate samples, while one lung carcinoma cell line had quadruplicate samples.
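The spot filtering and log-ratio rules above can be sketched as follows (a minimal illustration in Python; the 25-pixel and 100-intensity thresholds are taken from the text, while the function name and interface are hypothetical):

```python
import math

def spot_log_ratio(red, green, pixels, min_pixels=25, floor=100):
    """Return the log2(red/green) ratio for one spot, or None if filtered out.

    Spots smaller than min_pixels, or with both channels below the
    intensity floor, are excluded; a single channel below the floor is
    raised to the floor before the ratio is taken.
    """
    if pixels < min_pixels:            # spot too small: exclude
        return None
    if red < floor and green < floor:  # both channels weak: exclude
        return None
    red = max(red, floor)              # raise one weak channel to the floor
    green = max(green, floor)
    return math.log2(red / green)

# Examples mirroring the rules in the text:
print(spot_log_ratio(400, 100, pixels=30))  # log2(4) = 2.0
print(spot_log_ratio(50, 40, pixels=30))    # None: both channels below 100
print(spot_log_ratio(200, 60, pixels=30))   # green raised to 100 -> 1.0
```

The same floor-then-ratio order matters: thresholding after taking the ratio would produce different values for weak spots.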
1) Median
This normalization method uses the global median of log intensity ratios on each chip as the normalization factor. The global median log intensity ratio is calculated across all spots on the chip, and then this value is subtracted from the log intensity ratio for each spot. The global median of the normalized log intensity ratios equals zero.
2) Mean
This normalization method uses the global mean of log intensity ratios on each chip as the normalization factor. The global mean log intensity ratio is calculated across all spots on the chip, and then this value is subtracted from the log intensity ratio for each spot. The global mean of the normalized log intensity ratios equals zero.
3) Trimmed Mean
This normalization method is similar to the mean normalization method except that a trimmed mean of log intensity ratios on each chip is used as the normalization factor in place of the overall mean. A trimmed mean is calculated by discarding a certain percentage of the lowest and the highest log intensity ratios and then computing the mean of the remaining log intensity values. It is less susceptible to the effects of extreme values. In our experiments, we used a trimming percentage of 1% from both the lowest and highest data values.
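The three global methods above differ only in the summary statistic subtracted from each log ratio. A compact sketch in pure Python (the 1% trimming fraction is from the text; the function interface is illustrative):

```python
from statistics import mean, median

def normalize(log_ratios, method="median", trim=0.01):
    """Subtract a global summary statistic from every log intensity ratio."""
    if method == "median":
        factor = median(log_ratios)
    elif method == "mean":
        factor = mean(log_ratios)
    elif method == "trimmed_mean":
        # discard the lowest and highest `trim` fraction of values
        xs = sorted(log_ratios)
        k = int(len(xs) * trim)
        factor = mean(xs[k:len(xs) - k] if k else xs)
    else:
        raise ValueError(method)
    return [x - factor for x in log_ratios]

ratios = [-2.0, -0.5, 0.0, 0.5, 4.0]
print(normalize(ratios, "median"))  # median is 0.0 -> values unchanged
print(normalize(ratios, "mean"))    # global mean 0.4 subtracted
```

After median normalization the global median of the chip is zero, and after mean normalization the global mean is zero, exactly as stated above; with only five values and 1% trimming, the trimmed mean coincides with the plain mean.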
4) Lowess
Lowess normalization assumes that the dye bias might be dependent on spot intensity. Let (logG, logR) be the green and red background-corrected log intensities. Then (M, A) are defined by M = log(R/G) and A = (1/2) log(RG). Note that M is the unnormalized log ratio. The adjusted log ratio for the jth miR is computed by Mj*(Aj) = Mj - c(Aj), where c(Aj) is the lowess curve fit to the MA plot. For the calculations presented in this paper, the lowess curve was calculated using the R function loess with a span set at 0.5 [16].

5) Quantile-quantile

Quantile normalization [17] assumes that the distribution of miR abundances is nearly the same in all samples. For convenience, an artificial reference chip is created by pooling intensities across all chips in the experiment to produce an intensity reference distribution. This reference distribution is described by a distribution function F2. To normalize each chip, the distribution of miR intensities for that chip (e.g., denoted by the distribution function F1) is transformed to equal the reference intensity distribution. Operationally, this transformation is accomplished by determining, for each signal intensity on the chip, its quantile in the chip's intensity distribution and replacing that value with the value having that quantile in the reference distribution. In a formula, the transform is x_norm = F2^-1(F1(x)), where F1 is the distribution function of the actual chip, and F2 is the distribution function of the reference chip.

6) Invariant set option

Sometimes the normalization factors or curves calculated as described above are derived using only an invariant subset of the probes (e.g., miRs). The notion of invariant set normalization was first introduced for Affymetrix gene expression chips [18], but it can be generalized to miR arrays. This method assumes that there is a set of reference miRs that are invariant across a set of samples.
Rather than requiring a priori specification of a standard set of "housekeeping miRs", the invariant set is determined empirically. The invariant probes are identified by determining those probes which have most similar rank order across all arrays as measured by the smallest variance of ranks. There is some arbitrariness in deciding what percentage of the probes belong in the invariant set, so in our study we considered several possible percentages, including 10%, 20%, 30% and 40% of the probes with the smallest variance to serve as the "invariant set". Normalization methods 1) to 5) were then reapplied based on the defined invariant sets of miRs. The invariant set of miRs including 40% of the probes with smallest variance was used only for the quantile normalization method. The shorthand notation used to indicate the various normalization methods is the name of the main approach (Median, Mean, trimmed Mean, Lowess, or Quantile) with a suffix indicating the size of the invariant set used, if any (.10,.20,.30,.40). No suffix indicates that the full set of miRs was used.
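The quantile transform and the empirical invariant-set selection described above can be sketched as follows (pure Python on toy data; real implementations must also handle tied values and map non-invariant probes through the fitted transform):

```python
from statistics import pvariance

def quantile_normalize(chips):
    """Map each chip's values onto a pooled reference distribution.

    The reference is the mean of the k-th smallest value across chips;
    each value is then replaced by the reference value of its rank
    (x_norm = F2^-1(F1(x)); distinct values are assumed in this sketch).
    """
    n = len(chips[0])
    ranked = [sorted(chip) for chip in chips]
    reference = [sum(r[k] for r in ranked) / len(chips) for k in range(n)]
    out = []
    for chip, srt in zip(chips, ranked):
        rank_of = {v: k for k, v in enumerate(srt)}
        out.append([reference[rank_of[v]] for v in chip])
    return out

def invariant_set(arrays, fraction=0.3):
    """Indices of probes whose rank order is most stable across arrays."""
    n = len(arrays[0])
    ranks_per_array = []
    for arr in arrays:
        order = sorted(range(n), key=lambda i: arr[i])
        ranks = [0] * n
        for r, i in enumerate(order):
            ranks[i] = r
        ranks_per_array.append(ranks)
    rank_var = [pvariance([rk[i] for rk in ranks_per_array]) for i in range(n)]
    k = max(1, int(n * fraction))
    return sorted(sorted(range(n), key=lambda i: rank_var[i])[:k])

a, b = [1.0, 3.0, 5.0], [2.0, 8.0, 4.0]
print(quantile_normalize([a, b]))  # [[1.5, 3.5, 6.5], [1.5, 6.5, 3.5]]
# probe 2 keeps the same rank in both arrays; probes 0 and 1 swap
print(invariant_set([[5.0, 1.0, 9.0], [2.0, 4.0, 8.0]], fraction=0.34))  # [2]
```

After quantile normalization every chip has exactly the same sorted value distribution; restricting the fit to an invariant set relaxes the assumption that the overall distribution is comparable across all miRs.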
Measures of variation
We examined the impact of each normalization method on both within-miR error variance (between replicate arrays) and between-miR variance. This analysis was based on a components of variance model, Y_ij = m_i + e_ij, where Y_ij denotes the log transformed intensity ratio of the ith miR in the jth replicate. The error variance component σe² associated with e_ij (technical error) represents the reproducibility of the method. The variance component σm² associated with m_i (true miR expression) represents the true miR-to-miR variability. Formulas for the variance components and intra-class correlation based on method-of-moments estimation for each cell line under each normalization method can be computed as in Korn et al. [19]. The intra-class correlation (ICC) is estimated by σm² / (σm² + σe²); it estimates the proportion of the total variance (the sum of the within- and between-miR variances) due to the between-miR variance. It is desirable for the ICC to be large (close to one), indicating that the technical error variance is relatively small compared to biological differences between miRs [19]. When the error variance is fairly high, it is possible for the estimated ICC to be negative due to use of method-of-moments estimation, especially when the number of technical replicates is small. The advantage of the method-of-moments estimators is that they are unbiased and simple to compute.
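A method-of-moments ICC computation for one cell line can be sketched as follows (pure Python, balanced one-way random-effects decomposition; the exact formulas in Korn et al. [19] may differ in detail, and the data layout here is hypothetical):

```python
from statistics import mean, variance

def icc(data):
    """Intra-class correlation for {miR: [replicate log ratios]} data.

    Method-of-moments: the pooled within-miR replicate variance
    estimates sigma_e^2, and the variance of the per-miR means,
    corrected for sampling error, estimates sigma_m^2. An equal
    number n of replicates per miR is assumed; as noted in the text,
    the estimate can be negative when the error variance is high.
    """
    reps = list(data.values())
    n = len(reps[0])
    sigma_e2 = mean(variance(r) for r in reps)   # within-miR (technical)
    means = [mean(r) for r in reps]
    sigma_m2 = variance(means) - sigma_e2 / n    # between-miR (biological)
    return sigma_m2 / (sigma_m2 + sigma_e2)

# toy example: large between-miR spread, small replicate error -> ICC near 1
data = {"miR-a": [0.0, 0.1], "miR-b": [2.0, 2.1], "miR-c": [-2.0, -1.9]}
print(icc(data))
```

Since Var(mean_i) = σm² + σe²/n under the model, subtracting σe²/n from the variance of the per-miR means gives an unbiased estimate of σm².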
Statistical tests for differences in ICC between normalization methods
We examined the following normalization methods: no normalization, mean, median, trimmed mean, lowess and quantile normalization based on all miRs (N = 6 normalization methods); based on the three invariant sets defined above for the mean, median, trimmed mean, and lowess methods (N = 12); and based on four invariant sets for the quantile method (N = 4). For each of these normalization methods, there were 19 ICC values computed, corresponding to 10 lung cancer cell lines and 9 renal cancer cell lines. Separately for the lung cancer cell lines and the renal cancer cell lines, Wilcoxon signed-rank tests were applied to the ICC for each of the 231 possible pairings of these methods. Two methods were considered statistically significantly different if the 2-sided p-value from the signed-rank test was less than α = 0.01. This α level was chosen so that the expected number of false positive differences would be no more than 3 among the 231 paired tests for each of the two cell line experiments.
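The paired comparison of ICC values can be illustrated with a small exact implementation of the Wilcoxon signed-rank test (pure Python; in practice one would use a statistics package such as scipy.stats.wilcoxon, and the toy data below are hypothetical):

```python
from itertools import product

def wilcoxon_signed_rank(x, y):
    """Exact two-sided Wilcoxon signed-rank test for paired samples.

    Zero differences are dropped and ties among |differences| are not
    handled in this sketch (distinct magnitudes assumed). Enumerating
    all 2^n sign patterns is feasible for small n, such as the 19
    paired ICC values per comparison here.
    """
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    rank = [0] * n
    for r, i in enumerate(order):
        rank[i] = r + 1                      # ranks 1..n of |d|
    w_obs = sum(rank[i] for i in range(n) if d[i] > 0)
    mu = n * (n + 1) / 4                     # null mean of W+
    extreme = 0
    for signs in product((0, 1), repeat=n):  # each rank + or - w.p. 1/2
        w = sum(r for r, s in zip(range(1, n + 1), signs) if s)
        if abs(w - mu) >= abs(w_obs - mu):
            extreme += 1
    return extreme / 2 ** n                  # two-sided p-value

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.9, 1.7, 2.6, 3.5, 4.4]
print(wilcoxon_signed_rank(x, y))  # 0.0625: not below the 0.01 threshold
```

With all five differences positive, only the two most extreme of the 32 sign patterns are at least as far from the null mean, giving p = 2/32; at n = 19 pairs, much smaller p-values are attainable, which is why the α = 0.01 threshold above is workable.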
Results
The ICCs for different normalization methods using the ten lung cancer cell lines ranged from -0.30 to 0.87 (see Table 1, 2 and Figure 1). The quantile normalization methods based on invariant sets were observed to produce the highest mean ICCs across the ten lung cancer cell lines (mean ICC > 0.60, for all invariant set sizes 10-40%). The worst performing methods were the lowess methods when based on invariant sets (mean ICC < 0.50). For all pairwise comparisons of invariant set quantile normalization versus invariant set lowess normalization, the distribution of ICCs was significantly lower for the lowess-based methods compared to the quantile-based methods (P < 0.01 for all pairs, Wilcoxon signed rank tests). Cell line effects were also apparent, with the lowest average ICC observed for cell line 1 (mean ICC = 0.02, empty blue circle in Figure 1) and the highest average ICC observed for cell line 3 (mean ICC = 0.84, empty green square in Figure 1). When using the full data set (not restricting to an invariant set), global mean, global trimmed-mean, and global median performed about equally well, although those ICCs were somewhat lower than the ICCs for the quantile-based methods using invariant sets. With the exception of the lowess methods and methods using small invariant sets (e.g., 10%), performing some type of normalization generally produced higher ICCs than performing no normalization.
The ICCs for different normalization methods for the experiment involving nine renal cancer cell lines ranged from 0.66 to 0.96 (see Tables 3 and 4 and Figure 2). Overall, the ICCs were higher for the renal cell lines than for the lung cancer cell lines, likely due to the more controlled setting in which the renal cancer cell lines were processed, although it is possible that biological differences between the lung and renal cell lines could also partly explain the findings. The entire set of renal cancer cell line experiments was performed in one flashPAGE batch by one technician, in contrast to the lung cancer cell line experiments, which were processed in several batches. When using the full set of miRs for normalization, the mean, trimmed mean, and median normalization methods all produced similarly high ICCs. As was observed for the lung cancer cell line experiments, the lowess methods based on invariant sets tended to produce lower ICCs and the quantile methods based on invariant sets tended to produce higher ICCs. Comparing invariant set quantile normalization to invariant set lowess normalization, ICCs were always observed to be lower for the lowess-based methods compared to the quantile-based methods, with the pairwise differences reaching statistical significance for most pairs (P < 0.01 for most pairs, Wilcoxon signed rank tests) [Additional file 2].
Discussion
Data normalization is an important step in the analysis of microarray data. We explored a comprehensive collection of normalization methods in miR microarray experiments using lung cancer cell lines and renal cancer cell lines to address the question of which normalization methods might be most appropriate for miR microarray data. We tested global mean, trimmed mean, global median, lowess, and quantile-quantile methods and examined the impact of using each of these methods restricted to an empirically determined invariant miR set. We found that for our data sets, lowess normalization generally did not perform as well as the other methods. For the lung cancer cell lines quantile normalization applied to an invariant set was best on average unless restricted to a very small invariant set (e.g., 10%). Quantile normalization with invariant set also performed well for the renal cancer cell lines, but average observed ICCs were slightly higher for global median and mean methods. The good performance of quantile normalization restricted to an invariant miR set observed in our study is consistent with a previous study reported for a one channel miR chip [11]. Global median and global mean methods performed reasonably well in both data sets and have the advantage of computational simplicity. Although many different normalization methods have been used for gene expression microarray data, there may be characteristics of miR expression that will influence the optimal choice of normalization method for miR microarray data. The number of probes on a miR microarray is typically much smaller (a few hundred or less) than the number of probes on a gene expression cDNA microarray (usually tens of thousands), and the expected proportion of differentially expressed miRs comparing across samples in a miR microarray experiment might be higher than the proportion of differentially expressed genes typically expected for gene expression microarray studies. 
It may be difficult to anticipate what percentage of miRs are likely to be truly invariant across a set of samples used in an experiment, so ad hoc decisions may have to be made for the invariant set size to be used for normalization methods that use invariant sets. Our results suggested that using an invariant set consisting of only 10% of the miRs resulted in diminished performance compared to methods using larger invariant sets, but the appropriate invariant set size obviously could depend on the particular experimental setting. Global mean and median methods require assumptions that either the number of differentially expressed miRs is not too large or that the amount of over-expression and under-expression of miRs within each sample is somehow balanced so that the mean or median is still a reasonable indicator of overall shift in expression level due to technical factors. Researchers still need to consider carefully which assumptions underlying the different normalization methods appear most reasonable for their experimental setting and possibly consider more than one normalization approach to determine the sensitivity of their results to normalization method used.
Additional material
Additional file 1: Table presenting p-values resulting from Wilcoxon signed-rank tests used to compare ICCs of different normalization methods applied to data obtained by miR microarray analysis of 10 lung cancer cell lines.
Additional file 2: Table presenting p-values resulting from Wilcoxon signed-rank tests used to compare ICCs of different normalization methods applied to data obtained by miR microarray analysis of 9 renal cancer cell lines.
Fra-1 is a key driver of colon cancer metastasis and a Fra-1 classifier predicts disease-free survival.
Fra-1 (Fos-related antigen-1) is a member of the AP-1 (activator protein-1) family of transcription factors. We previously showed that Fra-1 is necessary for breast cancer cells to metastasize in vivo, and that a classifier comprising genes that are expressed in a Fra-1-dependent fashion can predict breast cancer outcome. Here, we show that Fra-1 plays an important role also in colon cancer progression. Whereas Fra-1 depletion does not affect 2D proliferation of human colon cancer cells, it impairs growth in soft agar and in suspension. Consistently, subcutaneous tumors formed by Fra-1-depleted colon cancer cells are three times smaller than those produced by control cells. Most remarkably, when injected intravenously, Fra-1 depletion causes a 200-fold reduction in tumor burden. Moreover, a Fra-1 classifier generated by comparing RNA profiles of parental and Fra-1-depleted colon cancer cells can predict the prognosis of colon cancer patients. Functional pathway analysis revealed Wnt as one of the central pathways in the classifier, suggesting a possible mechanism of Fra-1 function in colon cancer metastasis. Our results demonstrate that Fra-1 is an important determinant of the metastatic potential of human colon cancer cells, and that the Fra-1 classifier can be used as a prognostic predictor in colon cancer patients.
Introduction
Metastasis is the main reason for many solid tumors to be life-threatening. The metastatic cascade involves several steps, ranging from dissemination from the primary tumor to growth at a secondary site. The acquisition of metastatic capability by tumor cells can be associated with Epithelial-Mesenchymal Transition (EMT). Upon EMT, tumor cells are able to invade through the basement membrane of the primary tissue and stroma, and to enter the blood circulation. They often become anoikis resistant, which allows them to survive in the absence of attachment. Finally, they associate with the endothelium and extravasate to a secondary tissue. For outgrowth at secondary sites, the newly formed tumor foci need to induce angiogenesis [1,2]. Metastases are often difficult to cure because they can be widespread, affecting tissue function, and they are usually resistant to conventional therapies. Furthermore, intervention of metastatic cancer progression is rarely efficient due to lack of early detection methods. Therefore, it is crucial to predict metastatic potential of disease and to target metastasis.
One of the well-known regulators of metastasis is the Activator Protein 1 (AP-1) complex. AP-1 is a family of transcription factors regulating a broad spectrum of cellular processes including proliferation, migration and invasion [3]. AP-1 dimers are formed by the Fos (c-FOS, FOSB, Fra-1, Fra-2), Jun (c-JUN, JUNB, JUND), ATF and MAF protein families. AP-1 members are encoded by immediate early genes that are rapidly activated and deactivated in response to a wide range of stimuli. Although some AP-1 components have been reported to act as tumor suppressors, AP-1 complexes are mostly known for their ability to induce oncogenic transformation, among other processes such as proliferation, apoptosis, invasion and angiogenesis [4]. c-Fos, c-Jun and Fra-1 are among the AP-1 components whose overexpression correlates with poor prognosis in several types of malignancies including ovarian, lung, and breast cancers [5][6][7].
AP-1 is regulated by the Ras/Raf/MEK/ERK [8], [9] and the Wnt [10] pathways. The Wnt pathway is often deregulated in colon cancer as a result of activating mutations in beta-catenin (CTNNB1) or inactivating mutations in adenomatous polyposis coli (APC), which is a negative regulator of beta-catenin. Wnt signaling is not only critical for developmental and oncogenic characteristics like proliferation, survival, and differentiation but also drives metastasis-related processes such as migration and cell polarity [11]. Previous reports have shown that the Wnt pathway negatively regulates Fos and FosB expression, whereas it increases Fra-1 mRNA levels in mouse epithelial cells [12]. Moreover, non-canonical Wnt signaling activates AP-1 through TCF binding to c-Jun in human colon cancer cells [13].
Fra-1 is one of the AP-1 transcription factors; it lacks a transactivation domain and therefore has weak transforming activity. It forms heterodimers with Jun family members in order to activate target gene transcription. We and others have shown that Fra-1 promotes metastasis through various molecules: ADORA2B [7] in breast cancer, MMPs in breast cancer [14] and in lung epithelial cells [15], CD44 in mesothelioma [16], AXL in bladder cancer [17], and FAK and EZH2 in colon cancer [18], [19].
Colorectal cancer (CRC) is among the most common cancers and one of the leading causes of cancer-related deaths worldwide. Traditional classification divides CRC into four main stages based on the local extent of the tumor, with three subtypes of stage III tumors based on the number of cancer-positive nodes [20]. However, CRC is more heterogeneous than the categories used in the clinic with regard to progression, recurrence, metastasis and therapy response [21]. In the present study, we investigated the role of Fra-1 in colon cancer progression in vivo and the clinical impact of Fra-1 on disease outcome.
Fra-1 is not critically required for proliferation of colon cancer cells in vitro
As we have previously shown that Fra-1 is largely dispensable for human breast cancer cell growth in vitro but crucial for their ability to metastasize in vivo [7], we decided to investigate whether Fra-1 has a similar role in human colon cancer. Fra-1 was stably depleted in HT29, HCT116 and DLD-1 cells by two independent shRNAs (Figure 1A). There was no difference in proliferation rates between Fra-1-depleted cells and control cells on 2D culture plates (Figure 1B). However, we found a 30-50% decrease in the number of cells surviving under anoikis-inducing conditions (Figure 1C) and a three-fold decrease in the number of colonies formed by Fra-1-deficient HT29 cells in soft agar (Figure 1D). Colo205 cells, which have low endogenous Fra-1 expression levels, successfully formed colonies in soft agar and survived in anoikis-inducing conditions; Fra-1 overexpression caused a mild but significant increase in the number of cells in both cases (Suppl. Fig. 1A-1C). Thus, Fra-1 is not critically required for either 2D or 3D proliferation in vitro.
Fra-1 is largely dispensable for primary colon tumor growth in vivo
In order to assess the role of Fra-1 in in vivo tumor growth, we next injected control and Fra-1-depleted HT29 cells subcutaneously into severely immune-compromised (NOD/SCID IL2gamma, NSG) mice. Fra-1-depleted tumors grew approximately two-fold slower than control tumors (Figure 2A-2B). Immunohistochemistry staining and western blots showed that Fra-1 levels were still low in these tumors at the end of the experiment (Figure 2C-D), indicating that there is no selective pressure to lose Fra-1 shRNAs during tumor progression. These data show that although Fra-1 contributes somewhat to the expansion of colon cancer tumors in vivo, it is not strictly required.
Fra-1 is crucial for efficient metastatic spread of colon cancer cells
We and others have implicated Fra-1 as an important determinant of the metastatic capacity of cancer cells, which is associated with its ability to induce EMT and with clinical outcome [7,22]. In order to determine the role of Fra-1 in colon cancer metastasis in vivo, we injected Fra-1-depleted HT29 cells intravenously into NSG mice and monitored tumor expansion over time via a luciferase-dependent non-invasive in vivo imaging system. Whereas mice injected with cells carrying a control construct showed a substantial number of tumor foci distributed all over the body, tumor burden was sharply reduced in mice injected with Fra-1-depleted cells (Figure 3A-3B). Twenty-nine days after injection, control mice had a saturated luciferase signal accompanied by severe weight loss (Suppl. Fig. 2A). At this time point, the average difference between control mice and mice injected with Fra-1-depleted cells was 206-fold.
At autopsy, multiple macroscopic tumors were observed on the subcutaneous skin and peritoneal wall as well as in several organs of the control mice, such as lung, spine, kidneys, ovaries, lymph nodes, skin and muscles in the extremities (Suppl. Fig. 2B). Immunohistochemical staining further showed foci in the liver, bones and brain. Far fewer tumors were observed in the mice injected with Fra-1-depleted cells, both macroscopically and by immunostaining (Figure 3C-3D). Importantly, and in contrast to our observations for primary tumor growth, in the great majority of cases the tumor foci formed by Fra-1-depleted cells were positive for Fra-1, sometimes in a heterogeneous fashion (Suppl. Fig. 2C, 2D, 2E). We observed a similar pattern in HCT116 cells, which metastasize preferentially to the liver: Fra-1-depleted HCT116 cells formed significantly fewer and smaller foci in the liver upon intravenous injection (Suppl. Fig. 1D). Together, these results demonstrate that Fra-1 is critical for the metastatic spread of colon cancer cells in vivo, yet expendable for primary tumor growth.
Acute Fra-1 depletion impairs growth of established metastatic foci
The results obtained with cells lacking Fra-1 expression suggest an important contribution of Fra-1 to the metastatic potential of colon cancer cells. From a clinical point of view, it would be more relevant to determine the impact of Fra-1 depletion on tumors that have already been established, rather than to prevent outgrowth. Therefore, we decided to investigate whether acute loss of Fra-1 affects the growth of established tumor foci. This system also allowed us to exclude the potential bias where one group of cells may not survive the injection procedure or the mechanical stress caused by the blood circulation. We used an inducible tet-on system enabling us to deplete Fra-1 upon doxycycline administration via the drinking water of the mice (Suppl. Fig. 3A). To ensure a homogeneous level of Fra-1 downregulation upon doxycycline treatment, we generated a cell clone (HT29-C25) harboring the tet-on construct. The mice were injected intravenously with control or HT29-C25 cells and each group was randomized into two sub-groups on the day of injection. One group continuously received doxycycline in drinking water from day 0 onwards after the inoculation of tumor cells, whereas the other was mock-treated. The total tumor burden was reduced 56-fold in HT29-C25-injected mice upon doxycycline treatment (Figure 4A), whereas mice injected with control cells showed no difference in luciferase signal until the end of the experiment (Suppl. Fig. 3B). This result indicates that when Fra-1 knockdown is induced after the initial seeding of tumor cells upon intravenous inoculation, Fra-1 is required for tumor outgrowth.

Table 1: Cox proportional hazards model estimating hazard ratios for disease-free survival for the subtypes, stratified for gender and tumor stage. Table 2: KEGG pathway analysis on the classifier genes reveals the Wnt pathway as one of the pathways significantly regulated by Fra-1.
Next, Fra-1 depletion was induced when the tumor burden started increasing (after an initial drop, as judged by luciferase imaging). Twenty days after treatment, Fra-1 depletion caused an eight-fold reduction in tumor burden (Figure 4B). Once again, the rate of tumor development was the same in the mice injected with control cells regardless of doxycycline treatment (Suppl. Fig. 3C). Similar to the previous experiment (Figure 3), mice injected with control cells developed tumors in a broad range of organs, but there were very few macroscopically detectable tumors in the HT29-C25-injected mice on doxycycline treatment. Also similar to the previous experiment, tumors harvested at the end of the experiment showed varying levels of Fra-1, suggesting that some tumors were formed by Fra-1-proficient cells (Suppl. Fig. 3D). As assessed by the luciferase signal, doxycycline-treated mice had approximately five-fold lower tumor burden in their lungs compared to mock-treated mice (Figure 4C-4D). These mice had not only fewer but also smaller tumor foci in their lungs (Figure 4E). Altogether, these data suggest that Fra-1 is essential also for the growth and expansion of established (micro)metastases of colon cancer cells.

Figure 1: a. Fra-1 knockdown in colon cancer cells after transduction of two independent shRNAs. b. HT29 and HCT116 cells with or without Fra-1 knockdown were seeded into 6-well plates (30000 cells/well); the plates were stained with crystal violet after 7 days. c. HT29 cells with or without Fra-1 knockdown were seeded in duplicate in 0.3% agar suspension on top of a 1% agar base in 6-well plates at 24000 cells/well; after three weeks, colonies were stained with crystal violet and counted with ImageJ software (n=3). d. 0.4*10^6 cells were seeded into 6-well ultra-low-attachment plates in duplicate; the cells were harvested at day 6, trypsinized, resuspended and counted. Results presented are the combination of three experiments. Error bars represent SEM. Statistics: One-Way ANOVA. * p < 0.05, ** p < 0.01, *** p < 0.001.
Fra-1-regulated gene signature is a prognostic classifier in colon cancer
Based on these findings, which are consistent with, and extend, those of others [18], [23], [24], Fra-1 acts as an important pro-metastatic factor in colon cancer. Since metastatic relapse is a major cause of cancer-related deaths, we asked whether we could stratify colon cancer patients based on Fra-1 expression levels, similar to what we have shown recently for breast cancer [7]. The prognostic value of Fra-1 was assessed by correlating FOSL1 mRNA levels (encoding Fra-1) in colon cancer patient samples with disease-free survival in five gene expression datasets. We observed that patients whose tumors showed FOSL1 expression above median levels had a significantly worse prognosis in the first five years after treatment or surgery (Figure 5A).
However, Fra-1 is not an ideal drug target due to the absence of a catalytic site that can be readily targeted by a small molecule. The lack of an available inhibitor against Fra-1 prompted us to search for critical downstream targets of Fra-1 that are involved in metastasis. We compared the expression profiles of control and Fra-1-depleted HT29 cells by RNA sequencing and selected the genes that are significantly regulated by Fra-1. This classifier contains a total of 199 genes, 88 of which are positively regulated by Fra-1 and 111 negatively (Suppl. Table 1).
According to non-negative matrix factorization (NMF) analysis, colon cancer patients could be divided into three prognostic groups based on the expression levels of Fra-1-regulated genes. The heat map demonstrates that the genes that were positively regulated by Fra-1 are overexpressed in patients in subtype 1 and not in subtypes 2 and 3. On the other hand, genes that were negatively regulated by Fra-1 have lower expression levels in patients in cluster 1 (Figure 5B), independently of tumor stage or dataset (Suppl. Fig. 4). A Kaplan-Meier analysis showed that subtypes 2 and 3 are good prognosis groups. They only slightly differ from each other in the initial survival rates but in the long term have a similarly good prognosis. Subtype 1, on the other hand, has a significantly worse disease-free survival compared to the other two groups as well as poorer disease-specific and overall survival (Figure 5C, Suppl. Fig. 5). In a Cox proportional hazards model stratified for gender and stage, subtypes 2 and 3 showed significantly better disease-free survival (HR = 0.43, p = 4.42*10^-5 and HR = 0.51, p = 0.001) than subtype 1. Analyzing each stage separately, we found similar effects for each stage, albeit with different effect sizes (Table 1). A comparable pattern was observed with disease-specific and overall survival analysis, with the exception of stage 3 patients in subtype 3 in the case of overall survival (Suppl. Table 2a, 2b). These data suggest that overexpression of genes positively regulated by Fra-1 is correlated with poor outcome, whereas the expression of genes negatively regulated by Fra-1 is associated with better outcome.
Therefore, our Fra-1 classifier has prognostic power to predict the clinical outcome of colon cancer patients.
Fra-1 regulates the Wnt pathway
The influence of focal adhesions on motility and invasiveness and focal adhesion pathway regulation by Fra-1 in colon cancer cells are previously reported mechanisms of the pro-metastatic activity of Fra-1 [18], [25]. Consistently, Fra-1 knockdown in colon cancer cells decreased the expression of a panel of focal adhesion genes, indicating that our classifier is relevant and a reliable indicator of the aggressiveness of colon cancer ( Figure 6A).
On the other hand, regulation of the Wnt pathway by Fra-1 is an unexplored phenomenon. The Wnt pathway is significantly represented by seven genes in the classifier: whereas Wnt10A, SMAD3, DKK1 and DVL1 were downregulated upon Fra-1 knockdown, BAMBI, ROCK2 and PLCB4 were upregulated (Table 2). Notably, Wnt10A is the most abundantly expressed Wnt gene in HT29 cells (Suppl. Fig. 6). We validated the downregulation of Wnt10A, SMAD3, DKK1 and DVL1 upon Fra-1 depletion in HT29 cells, and of Wnt10A and DVL1 in HCT15 cells (Figure 6B-6C). We also examined by a luciferase reporter assay whether Fra-1 depletion modified beta-catenin activity. HT29, HCT15 and DLD-1 cells with or without Fra-1 knockdown were transfected with the TOP/FOP constructs to measure the transcriptional activity of beta-catenin upon loss of Fra-1 expression. We observed an effective reduction in beta-catenin-mediated transcription in Fra-1-depleted cells compared to control cells (Figure 7). These data suggest that Fra-1 regulates canonical Wnt signaling by modulating the expression of Wnt pathway components and the transcriptional activity of beta-catenin.
Discussion
The high lethality rate of colon cancer is mainly due to recurrence and distant metastasis. It is therefore crucial to better understand and predict these outcomes in order to take appropriate action with regard to treatment options. In this report, we demonstrate that Fra-1 is a critical biological determinant of colon cancer metastasis, as judged by two main observations. First, Fra-1 depletion severely impaired metastatic foci formation of colon cancer cells in vivo. Second, gene expression analysis by RNA sequencing of metastatic colon cancer cells revealed that a Fra-1 classifier comprising genes significantly regulated by Fra-1 is a strong predictor of disease-free survival.
Others have previously shown that Fra-1 is responsible for migration of colon cancer cells in vitro [18]. We found that Fra-1 is critical for the metastatic spread of colon cancer cells, even after establishment, yet largely dispensable for primary tumor growth. The growth rate of subcutaneous xenografts mirrors the results of 3D colony formation assays in vitro, showing a significant three-fold decrease in growth and colony number, respectively. These results, combined with the fact that Fra-1 knockdown was retained until the end of the experiment, indicate that Fra-1 is not critically required for primary tumor growth, since the tumors still grow in the absence of Fra-1. This differs, for example, from our recent observations for DDR kinases, for which shRNAs were commonly lost during tumor expansion [26]. In contrast, in an experimental metastasis model where the cells are injected intravenously, Fra-1-depleted colon cancer cells show a stark defect in their ability to form metastatic foci in the mice.

Figure 5: a. Disease-free survival based on FOSL1 expression. Samples were split according to lower or higher than average expression of FOSL1; patients with low expression of FOSL1 exhibited significantly longer DFS than patients with higher expression. b. Heat map of the gene expression of the Fra-1 signature. Gene expression is shown as a color gradient from blue (low expression) to yellow (high expression). The color bar on the left side indicates the direction of regulation in Fra-1 knockdown cells; color bars on top of the heat map show sample stage, source data set and Fra-1KD signature cluster, in this order. c. Disease-free survival curve for the three subtypes resulting from hierarchical clustering with the Fra-1 signature. Subtype 1 has significantly shorter DFS than subtypes 2 and 3, which show no survival difference.
The observation that the few tumor foci that could be found in the lungs of mice injected with Fra-1-depleted cells were largely Fra-1-positive indicates that the tumor burden is mainly caused by Fra-1-proficient cells. We did not observe such a negative selection pressure against, nor a similar growth disadvantage of, Fra-1-depleted cells in primary tumor growth. Together, these data suggest that Fra-1 has a predominantly metastasis-related role in colon cancer.
Metastasis is a stepwise process in which the cells must first disseminate from the primary tumor, join the blood stream or the lymphatic system by digesting through the stroma and the basement membrane, extravasate at a secondary site, and grow out [27], [1]. Since our experimental metastasis model bypasses the initial steps of metastasis and the cells directly enter the blood circulation upon injection, the difference between the metastatic ability of Fra-1-deficient and Fra-1-proficient cells is most likely because Fra-1-deficient cells fail to survive in the blood stream or to extravasate at a secondary site. By using an inducible system, and therefore giving equal chances of survival after inoculation, we tested whether the cells would still suffer from an acute loss of Fra-1 after intravenous injection and establishment of (micro)metastases. Extravasation and micrometastasis formation have been shown to occur within the first 24 hours after inoculation [28][29][30]. In the absence of the support of other cancer cells or a stromal mimic like matrigel, HT29 cells depleted of Fra-1 were 56-fold less successful in forming metastatic tumor foci, resulting in significantly fewer and smaller tumor foci in the lungs of the mice. Heterogeneous Fra-1 levels in the tumor foci were commonly observed, with many cells showing restoration of Fra-1 levels, suggesting a negative pressure against Fra-1 knockdown cells. These results further demonstrate that Fra-1 is critical for the metastatic growth of colon cancer cells.
Fra-1 is overexpressed in several cancers [31] and we show that its expression correlates with a poor 5-year survival chance of colon cancer patients. It has proven difficult to develop inhibitors against transcription factors, making Fra-1 an unlikely drug target, even though this could aid in improving the treatment options of colon cancer patients. Furthermore, expression levels of transcription factors do not necessarily reflect the level of their activity. It has been suggested that in a data-driven approach, targets acting downstream of a transcription factor, rather than the transcription factor itself, possess better distinguishing features, because they reflect the activity of the transcription factor [32]. For these reasons, we compared the RNA expression profiles of Fra-1-proficient and -deficient HT29 cells. We found a total of 199 genes significantly regulated by Fra-1. Our Fra-1 classifier is able to stratify colon cancer patients into three groups based on their disease outcome: two good (subtypes 2 and 3) and one poor (subtype 1) prognosis groups. Consistent with the role of Fra-1 in metastasis, in patients with a poor prognosis, genes positively regulated by Fra-1 are overexpressed while genes negatively regulated by Fra-1 have low expression. Based on this classifier, disease-free survival rates of patients in subtype 1 are significantly lower than those in subtypes 2 and 3. The same pattern is observed for disease-specific and overall survival rates, albeit with less significance.

Fra-1 is a transcription factor functioning in heterodimers with other components of the AP-1 family. We and others have previously shown that Fra-1 downregulation restores epithelial characteristics, including an epithelial-like morphology from a mesenchymal-like one, in breast cancer cells [7], [22] and colon cancer cells [33].
Moreover, several attempts to classify CRC based on gene expression data identified an EMT-related subtype associated with poor prognosis [34][35][36][37]. However, in colon cancer cell lines, we did not observe any change in the morphology, nor in E-cadherin or Vimentin protein levels upon Fra-1 depletion. We also failed to find any EMT genes significantly regulated in our RNA sequencing data. Furthermore, our colon cancer Fra-1 classifier has minimal or no overlap with other prognostic classifiers [35][36][37][38][39] nor with our Fra-1 breast cancer classifier [7]. Since Fra-1 has hundreds if not thousands of target genes, it is conceivable that it regulates several oncogenic processes via different mechanisms in different contexts. Furthermore, the basal expression pattern of Fra-1 target genes is conceivably not identical among different (cancer) tissues. On the other hand, earlier studies comparing metastatic to primary tumors to identify prognostic metastasis genes failed to identify Fra-1 [40][41][42], although Fra-1 was found to be upregulated in cancer cells compared to normal colon [40].
KEGG pathway analysis on this list of genes in the classifier revealed the focal adhesion and the Wnt pathways as overrepresented. Focal adhesions are known to be regulated by several AP-1 components, including Fra-1 [18], [43]. We confirmed the reliability of our results by validating the Fra-1-mediated regulation of several genes involved in focal adhesions by qRT-PCR. While expression of AP-1 components has been reported to be regulated by the Wnt pathway [10], [12], a reciprocal regulation between the AP-1 transcription factor complex and Wnt signaling has only been shown in an RNA profiling study [33] and awaits further validation. Here, we validated that Wnt pathway genes such as DKK-1, DVL-1 and Wnt10A are indeed positively regulated by Fra-1 in both HT29 and HCT15 colon cancer cell lines. Wnt10A plays an oncogenic role in renal cell carcinoma by activating the canonical Wnt pathway [44], and has been found to be highly expressed especially in the invasive fronts of esophageal cancer [45]. DVL-1 is a scaffolding protein that interacts with the Wnt receptor upon ligand binding and prevents the destruction of beta-catenin, allowing it to be transported to the nucleus and to form a transcription factor complex with TCF/LEF [46]. Despite some conflicting reports about DKK-1 promoting migration and invasion [47], it is recognized as a tumor suppressor and an inhibitor of Wnt signaling [48]. In this context, one would expect DKK-1 levels to increase upon depletion of Fra-1. However, since DKK-1 expression is regulated by beta-catenin [49], reduced beta-catenin activity results in reduced DKK-1 levels in Fra-1-depleted colon cancer cells. Although we did not see beta-catenin being directly regulated by Fra-1 at the RNA level, reporter assays showed that the activity of beta-catenin is decreased upon Fra-1 depletion.
Because beta-catenin is known to regulate EMT and metastasis, it is plausible that the pro-metastatic function of Fra-1 is partially dependent on beta-catenin activity, which is tightly regulated by the Wnt pathway.
In conclusion, we find that Fra-1 is a critical factor in driving metastasis of human colon cancer cells in vivo. Furthermore, we show that a Fra-1 classifier is a highly significant predictor of patient outcome, independent of disease stage. We propose that Fra-1-regulated genes may be explored as therapeutic targets for colorectal cancer.
Beta-catenin reporter assay
The cells were seeded on 24-well plates (Costar) at 1*10^5 cells/well and co-transfected with 100ng of TOP or FOP constructs and 10ng of pRL using Lipofectamine 3000 reagent (Invitrogen) following the manufacturer's instructions. 48 hours after transfection, the cells were lysed and luciferase signals were measured in triplicate by dual-luciferase reporter assay (Promega). Transfection efficiencies were normalized by dividing the firefly luciferase signal by the renilla luciferase signal for each well. The TOP/FOP ratio was calculated by dividing the normalized luciferase signal from TOP-transfected cells by that from FOP-transfected cells.
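As a concrete illustration of this normalization, the arithmetic can be sketched as follows; the luminescence readings below are hypothetical, not data from this study:

```python
def normalized_signal(firefly, renilla):
    """Normalize a well's firefly signal by its renilla signal (transfection control)."""
    return firefly / renilla

def top_fop_ratio(top_well, fop_well):
    """Beta-catenin reporter activity: normalized TOP signal over normalized FOP signal."""
    return normalized_signal(*top_well) / normalized_signal(*fop_well)

# Hypothetical (firefly, renilla) readings for TOP- and FOP-transfected wells
top_wells = [(9000, 300), (8800, 290), (9200, 310)]
fop_wells = [(1500, 310), (1400, 300), (1600, 295)]

ratios = [top_fop_ratio(t, f) for t, f in zip(top_wells, fop_wells)]
mean_ratio = sum(ratios) / len(ratios)
print(round(mean_ratio, 2))  # prints 6.06
```

A reduced mean ratio in Fra-1-depleted cells relative to controls would indicate lower beta-catenin transcriptional activity, as reported in Figure 7.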
qRT-PCR primers
Total RNA was isolated by harvesting the cells in Trizol (Invitrogen) 6 days after lentivirus transduction and extracting RNA by subsequent chloroform, isopropanol and ethanol treatments. Following DNase treatment for 1 hour at 37°C, cDNA was prepared by a reverse transcriptase kit (Invitrogen). The average values obtained from two independent experiments are presented.
Survival analysis
From the Gene Expression Omnibus, we downloaded five publicly available data sets (GSE17536 [41], GSE17537 [41], GSE14333 [42], GSE33113 [50] and GSE37892) with gene expression data from primary CRC samples. Samples contained in both GSE14333 and GSE17536 were removed from GSE14333. Disease-free survival and staging information was available for a total of 578 tumor samples contained in these data sets. Disease-specific survival and overall survival were only available for datasets GSE17536 and GSE17537, totaling 232 tumors. Differences in survival times were analyzed using the Mantel-Cox log-rank test as implemented in the survival package. We performed survival analysis combining all stages stratified by stage and gender, and stage-specific survival analysis.
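For illustration, a minimal pure-Python version of the two-group Mantel-Cox log-rank statistic (the analyses above used the R survival package) might look like the following sketch; the survival times are invented:

```python
def logrank_statistic(times1, events1, times2, events2):
    """Mantel-Cox log-rank chi-square statistic (1 d.f.) for two survival groups.

    times*: follow-up times; events*: 1 = event observed, 0 = censored.
    """
    event_times = sorted({t for t, e in zip(times1, events1) if e} |
                         {t for t, e in zip(times2, events2) if e})
    obs1 = exp1 = var = 0.0
    for t in event_times:
        n1 = sum(1 for x in times1 if x >= t)   # group 1 at risk at time t
        n2 = sum(1 for x in times2 if x >= t)   # group 2 at risk at time t
        d1 = sum(1 for x, e in zip(times1, events1) if x == t and e)
        d2 = sum(1 for x, e in zip(times2, events2) if x == t and e)
        n, d = n1 + n2, d1 + d2
        obs1 += d1
        exp1 += d * n1 / n                      # expected events in group 1
        if n > 1:
            var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return (obs1 - exp1) ** 2 / var

# Hypothetical disease-free survival times (months): group 1 relapses earlier
chi2 = logrank_statistic([5, 8, 12, 20, 24], [1, 1, 1, 1, 0],
                         [15, 22, 30, 36, 40], [1, 0, 1, 0, 0])
print(round(chi2, 2))  # prints 3.67
```

The resulting statistic is compared against a chi-square distribution with one degree of freedom to obtain the p-value; stratified analyses, as performed here, compute the statistic within each stratum before pooling.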
RNA sequencing and generation of the Fra-1 classifier
Fra-1 was depleted from HT29 cells with two independent short hairpins. RNA was isolated using Trizol and sequenced on a HiSeq 2000 System (Illumina). Data are available at the NCBI Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov) with the accession number GSE69415. Data were analyzed using the R statistical environment [49]. Illumina sequencing data were processed using DESeq version 1.12 [49]. We derived a Fra-1 knockdown (Fra-1KD) gene expression signature comparing RNA sequencing data from HT29 cell lines without and with shRNA knockdown of Fra-1. Knockdown was performed using two different hairpins in triplicate. In total, three samples of the wild type cell line and two knockdown samples (each with a different hairpin) were sequenced. Genes were selected as differentially expressed if they were differentially expressed between wild type and Fra-1KD cells but not between replicates. Nominal p-values were corrected for multiple testing using the Benjamini-Hochberg procedure, and corrected p-values < 0.1 were regarded as significant. We applied the Fra-1KD signature to the independent data set consisting of 578 samples described above. More specifically, we used non-negative matrix factorization (NMF) as implemented in the NMF package for R (http://dx.doi.org/10.1186/1471-2105-11-367) to cluster the samples into three subtypes. We compared disease-free survival between these clusters as described above.
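The Benjamini-Hochberg correction mentioned above can be sketched in a few lines; the p-values below are made up for illustration:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (FDR), returned in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of q-values
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# Hypothetical nominal p-values for eight genes
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
adj = benjamini_hochberg(pvals)
# Count genes passing the corrected-p < 0.1 threshold used in the paper
print(sum(q < 0.1 for q in adj))  # prints 7
```

In the actual analysis this correction is applied by DESeq across all tested genes; the 199-gene classifier consists of the genes surviving the 0.1 threshold in the wild-type versus Fra-1KD comparison but not between replicates.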
Tumor xenografts and bioluminescence analysis
All animal work was done in accordance with a protocol approved by the Netherlands Cancer Institute Animal Experiment Ethics Committee. Female NOD/SCID IL2gamma (NSG) mice aged 5-8 weeks were used for all in vivo experiments. 0.5*10^6 cells were injected into the lateral tail vein in 150ul PBS, or subcutaneously into both flanks in a 100ul 1:1 mixture of growth factor-reduced matrigel and complete medium. Subcutaneous tumors were measured manually twice weekly with a caliper. For bioluminescence imaging, the mice were intraperitoneally injected with 15mg/kg D-luciferin (Caliper Life Sciences) 15 minutes prior to imaging. The mice were anaesthetized and imaged with 60 seconds of exposure time (binning=8). Tumor burden in individual organs was quantified by injecting the mice with D-luciferin five minutes prior to sacrifice, harvesting the organs and imaging them in a PBS-luciferin mixture. The data were analyzed with Living Image software. For shFra-1 induction, mice were treated with 2mg/ml doxycycline in drinking water containing 10mg/ml sucrose.
Statistical analysis
Comparisons of two experimental groups were analyzed with a two-tailed Student's t-test. One-Way ANOVA corrected for multiple comparisons (Holm-Sidak) was used to compare more than two experimental groups (Prism; GraphPad Software). Error bars represent the standard error of the mean (SEM).
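Since error bars throughout represent the SEM, a minimal helper showing the computation (sample standard deviation divided by the square root of n) may be useful; the triplicate values are hypothetical:

```python
import math

def sem(values):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    # Sample (n-1) variance, as used for experimental replicates
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / math.sqrt(n)

# Hypothetical triplicate colony counts from a soft-agar assay
print(round(sem([10, 12, 14]), 3))  # prints 1.155
```

Because the SEM shrinks with the square root of the number of replicates, it describes the precision of the estimated mean rather than the spread of individual measurements.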
Multivariate analysis was performed by fitting a Cox proportional hazards model to estimate hazard ratios for the subtypes by stratifying for gender and tumor stage.
Acknowledgments
The authors would like to thank Prof. Dr. Emile Voest for sharing constructs, Dr. Wilbert Zwart for his useful suggestions, and all the members of the Peeper Laboratory for their valuable input. We would also like to acknowledge the Animal Pathology Department for the immunohistochemistry stainings, Dr. Ji-Ying Song for her help interpreting the immunohistochemistry stainings, the Sequencing Core Facility for the generation of RNAseq data and Animal Caretakers for their assistance with the mouse experiments.
This work was financially supported by a grant from the Dutch Cancer Society (2009-4552) to SI and DP.
Conflicts of interest
The authors declare no potential conflicts of interest.
Mediterranean Wild Edible Plants: Weeds or “New Functional Crops”?
The Mediterranean basin is a biodiversity hotspot of wild edible species, and their therapeutic and culinary uses have long been documented. Owing to the growing demand for wild edible species, there are increasing concerns about the safety, standardization, quality, and availability of products derived from these species collected in the wild. An efficient cultivation method for the species with promising nutraceutical value is highly desirable. Against this backdrop, a hydroponic system could be considered a reproducible and efficient agronomic practice to maximize yield, and also to selectively stimulate the biosynthesis of targeted metabolites. The aim of this report is to review the phytochemical and toxic compounds of some potentially interesting Mediterranean wild edible species. Herein, after a thorough analysis of the literature, information on the main bioactive compounds, and some possibly toxic molecules, from fifteen wild edible species has been compiled. The traditional recipes prepared with these species are also listed. In addition, preliminary data on the performance of some selected species are reported. In particular, germination tests performed on six selected species revealed that there are differences among the species, but not with respect to crop species. “Domestication” of wild species seems a promising approach for exploiting these “new functional foods”.
Introduction
Since ancient times, wild plants have widely been used in traditional Mediterranean culture, and the link between wild plants and human life is a prominent feature. Wild plants are known to be used in ancient cultures for different purposes, such as food, medicines, production of goods (for example clothes), and magic and religious rituals. In particular, the use of wild edible plants in Europe has been mainly linked to periods of famine, therefore these herbs are called "famine food" [1]. Through the years, the use of these plants in traditional recipes of the Mediterranean diet has continuously increased, and in parallel, people have discovered their medicinal properties [2]. Today, the renewed interest in wild edible plants, and knowledge of the healthy role of phytochemical compounds, makes it possible to define them as "new functional foods". On the other hand, strong concern about safety, yield, and the phytochemical profiles of these species, makes it crucially important to establish a large-scale methodology of cultivation of the most promising species, in terms of both nutraceutical value and profitability. The hydroponic system represents a reproducible and efficient agronomic practice to maximize not only yield, but also to selectively stimulate the biosynthesis of targeted metabolites [3,4]. Another important aspect worth further analysis is the high variability in the percentage and mean germination time of wild edible species [5,6].
Wild Edible Plants in the Mediterranean Basin
The Mediterranean basin is characterized by a massive abundance of wild edible species. Of the fifteen selected wild species that appear to be the most promising for cultivation, the most representative compounds are detailed in Table 1. A plethora of bioactive compounds with medicinal and nutraceutical properties has been isolated from these species. Among them, silenan SV from Silene vulgaris, with immunomodulatory activity [7], and alliin in Allium ampeloprasum L., with powerful antioxidant activity [1], are well-known examples. Wild species are constitutively rich in secondary metabolites with antioxidant and healthy properties, and for these reasons could be regarded as a new source of functional food. On the other hand, many of these properties were already known, even though not scientifically proven.
There is a difference between developing and industrialized countries in their habits of consumption of wild species. In developing nations, many edible wild plants are used as a source of food because the domesticated crop yield is not sufficient, whereas in most industrialized countries food supply is not a problem, thus wild plants are used to diversify a monotonous diet. Today, the concept of food in developed countries is profoundly modified. Indeed, consumers are no longer interested only in the supply of basic nutrients; they also demand the contribution of nutraceutical compounds. The Mediterranean diet is rich in traditional dishes with wild edible species cooked in different ways, such as soups, pies, mixtures, boiled vegetables, and ravioli. According to popular tradition, some culinary uses of the species are reported in Table 2. Table 2. Traditional recipes prepared with the fifteen Mediterranean wild edible species that have been selected in this review for their aptitude for cultivation.
Toxicity of Wild Edible Plants
A high accumulation of nitrites, oxalate, and some other specific toxic compounds is frequent in some edible species when collected in the wild, so moderate use is suggested. For example, nitrites bind to haemoglobin and reduce the transport of oxygen to tissues [43]. Furthermore, the capacity of nitrites to combine with amines produces nitrosamines, which are carcinogenic substances [43]. Oxalic acid can reduce the availability of calcium through the formation of an insoluble complex of calcium oxalate, known as raphide, which is the primary cause of the most common kind of kidney stones [74]. Thus, the development of species-specific cultivation protocols can be useful to limit the accumulation of possible toxic compounds in the species that are well appreciated by consumers.
B. officinalis, one of the most commonly eaten wild plants, should be consumed with precaution as it contains considerable amounts of hepatotoxic pyrrolizidine-based alkaloids, such as thesinine, lycopsamine, and intermedine, which are mildly mutagenic. Acute poisoning by pyrrolizidine alkaloids causes haemorrhagic necrosis, hepatomegaly, and ascites. The subacute toxicity is characterized by occlusion of the hepatic veins and subsequent necrosis, fibrosis, and liver cirrhosis [74]. Another wild species, F. vulgare, contains two toxic phenylpropanoids: estragole with hepatocarcinogenic activity; and trans-anethole, having genotoxic and hepatocarcinogenic properties [75].
The concentration of oxalate, nitrates, and other toxic compounds found in the selected wild edible species is given in Table 3.
Exploiting the Possibilities of Cultivation of Some Wild Mediterranean Edible Species: Preliminary Results, Perspectives and Opportunities
The Food and Agriculture Organization defines wild edible plants as: "Plants that grow spontaneously in self-maintaining populations in natural or semi-natural ecosystems and can exist independently of direct human action" [80]. However, the gap between the increasing human population and food availability is constantly enlarging, which requires protecting some plant species from imprudent harvesting. In addition, considering food safety, the phytochemical properties of food are a hot topic, especially in Western countries [80]. Therefore, it seems important to find an efficient cultivation method for wild species (though this contrasts with the definition of "wild species") to allow a large-scale, high-yield production with a reproducible phytochemical profile, and in parallel, reduce the risks related to the presence of toxic compounds. Below we report some preliminary results from germination tests of some wild species (Table 4); and the biomass yield of R. acetosa and S. minor (Table 5), the two species that have demonstrated good potential for cultivation in a hydroponic system, an agronomic technique that ensures high yield and standardization in phytochemical profiles.
Germination Test
Seeds of wild species collected in the wild are usually characterized by a reduced germination rate compared with commonly cultivated species. In Table 4 we report the germination test of some potentially interesting wild Mediterranean edible species, namely P. oleracea, R. acetosa, S. vulgaris, S. minor, T. officinale, and U. dioica. The germination rate was evaluated in Petri dishes under both dark and light (about 250-300 µmol quanta m−2 s−1) conditions at 27 °C and saturated relative humidity (25 seeds per Petri dish; n = 3). The germination rate was calculated as the percentage of seeds germinated after ten days (Table 4). Within the ten-day window, the mean germination time was calculated as the mean number of days necessary to obtain the maximum germination (Table 4). The germination rate was found to be highly variable: for example, under light conditions germination was very low in U. dioica, medium in T. officinale and P. oleracea, and very high in S. vulgaris, R. acetosa, and S. minor (the latter was similar to that of commercial seeds of Eruca sativa (L.) Mill.). We did not observe differences between the germination rates under dark and light conditions (p > 0.05), except for P. oleracea and T. officinale, for which the rate was significantly reduced in dark conditions (Student's t test; p < 0.01). The mean germination time under light conditions was the lowest in P. oleracea, followed by R. acetosa, S. minor, and T. officinale, whilst U. dioica showed the highest. No remarkable differences were found among the species when the mean germination time of seeds grown under light was compared with that observed in dark conditions (p > 0.05).
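The two indices above can be computed directly from daily germination counts. The following sketch (Python, illustrative only; the function name and input format are our assumptions, and the standard mean-germination-time formula used here may differ in detail from the authors' exact computation) takes the number of seeds newly germinated on each day:

```python
def germination_stats(daily_new, n_seeds=25):
    """Germination percentage over the observation window and mean
    germination time (MGT).

    daily_new[i] = seeds newly germinated on day i+1 (e.g., ten entries
    for a ten-day test). MGT uses the conventional formula
    sum(n_i * t_i) / sum(n_i), which matches "the mean of the days
    necessary to obtain the maximum germination" under the usual reading.
    """
    total = sum(daily_new)
    pct = 100.0 * total / n_seeds
    if total == 0:
        return pct, float("nan")  # no seed germinated; MGT undefined
    mgt = sum(n * day for day, n in enumerate(daily_new, start=1)) / total
    return pct, mgt
```

For example, a dish of 25 seeds in which 5, 10 and 5 seeds germinate on days 2, 3 and 4 gives 80% germination and an MGT of 3 days.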
The Cultivation
In addition to the low germination rate observed for some wild species, another critical point to overcome in the first stages of "domestication" of wild species is the establishment of a proper cultivation method. In many cases, wild species typically inhabit limiting environments, and are often slow-growing with very low biomass yield. The selection of the most promising genotypes can overcome this problem if implemented in association with the cultivation practice that best maximizes the biomass yield. Therefore, we utilized the hydroponic cultivation system (the floating system, Figure 1), given that it delivers better plant yields than soil culture, with less water usage and higher fertilizer efficiency. Other authors [3] have indeed utilized the hydroponic system for the cultivation of wild medicinal plants, not only to maximize the plant yield, but also to selectively stimulate the biosynthesis of targeted metabolites, and/or to standardize the biochemical profile of these species [82]. Another important advantage of the hydroponic system is the possibility of reducing the content of toxic compounds [83].
Taking into consideration the highest percentages of germination of R. acetosa and S. minor, these species were tested for their potential for cultivation in a floating system, and a pilot experiment was conducted in which these two species were grown hydroponically. Preliminary results concerning the cultivation of R. acetosa and S. minor in the floating system showed a lower yield than that of some commercial species (Table 5). However, with appropriate manipulation of the nutrient solution, growing conditions, and genotype selection, the challenge of increasing the biomass yield of these species can realistically be addressed. However, in this study only very preliminary results are given, and further investigations are needed to form a complete picture of the performance of these two species. In addition, similar experiments need to be carried out with other wild edible species of interest as sources of healthy bioactive compounds, and the organoleptic characteristics of these species also need to be evaluated, as they are an important aspect for consumers. Table 5. Biomass yield of hydroponically-cultivated Rumex acetosa and Sanguisorba minor, Valerianella locusta L. Laterr., and Eruca sativa. Data are the mean (± SD) of three independent replicates.
Perspective and Opportunities for Wild Edible Species Cultivation
Ethnobotanical surveys show that more than 7000 species of wild plants have been used for human food at some point throughout human history, and that edible species are a regular component of the diets of millions of people [86]. Recent studies also pointed out that many people worldwide still rely on local environmental resources, especially wild plants, for daily subsistence and healthcare [87][88][89][90]. In different regions lacking basic infrastructure and market access, wild gathering provides considerable subsistence support to local diets [91], and may also generate further benefits (e.g., selling surpluses) [92]. However, in some cases gathering from the wild, and family farming and/or smallholder agriculture, are not enough to meet nutritional needs in developing regions [93], as was expressed in a report on the state of food insecurity in the world [94], which states, "progress towards food security and nutrition targets requires that food is available, accessible and of sufficient quantity and quality to ensure good nutritional outcomes". Furthermore, in the near future, increasing human population, and continued globalization of trade and markets, along with ethnobotanical exploration, is expected to continue to increase awareness in the use of new plant materials. Therefore, the increase in demand for wild edible species will likely continue to threaten native species in some areas worldwide, as price differentials between wild and cultivated plants currently encourage unsustainable collection practices in some localities, especially in economically depressed regions that lack well-established rules for protecting wild plants [95].
Combining traditional knowledge and expertise with more recent concepts (e.g., public policies addressed to increasing human rights to food, health, and welfare, in addition to supporting plant biodiversity) is necessary for the benefit of future generations. The possibility to cultivate these wild edible species seems a promising approach to improve wild species yields and availability in a sustainable way, while protecting natural and crop biodiversity, as well as avoiding harmful anthropogenic contamination of food, or the harvest of toxic species by inexperienced people. Research on the cultivation of wild species is in its infancy, and as also reported above, results indicate these species are still not competitive with more commercial species. However, there are significant possibilities to increase the yield of wild edible species, as has happened in the past for major crops, and this would encompass: (i) Selection of suitable species for their aptitude for cultivation, (ii) breeding programs to selectively promote plant yield, and (iii) establishment of cultivation protocols to maximize plant performance. Of course, all these aspects should be considered in the context of local uses and economic possibilities; obviously the hydroponic technique represents just one of the possible cultivation techniques, principally "affordable" in industrialized countries, whereas in other developing areas, other cultivation techniques have to be applied. In any case, cultivation will represent a step forward to: (i) Reduce the pressure of gathering in the wild, (ii) reduce the risk of food contamination, and (iii) diversify the human diet and promote access to bioactive food. In this perspective, new ideas about food and health are welcome to respond to the demand for food supply, quality, and safety.
Conclusions
Wild edible plants are widely present in the Mediterranean basin, and ethnobotany reports their cooking and medicinal use over a long time. Today, more than one billion people in the world utilize wild vegetables in their daily diet, especially in developing countries. Conversely, people of industrialized countries are "rediscovering" wild edible species for culinary use, as these wild vegetables add a variety of color, taste, and texture in their diet. It seems necessary to develop an efficient large-scale cultivation method for these species in order to standardize their yield and nutraceutical values. Nevertheless, in most cases, wild species can be toxic due to the high content of oxalic acid, nitrates, and sometimes, other toxic compounds [74]. Consequently, excessive consumption can cause some problems to human health, especially in infants [14]. Therefore, cultivation techniques can also be beneficial in controlling and limiting the accumulation of nitrates and oxalic acid. It is conceivable that with appropriate research addressed to improving these features, and with proper promotional marketing, these wild edible species may open up new commercial opportunities in the countries of the Mediterranean area. The nutritional and nutraceutical properties of these wild species make them especially charming considering the increasing attention amongst people towards the connection between food and health. In other words, some of these "neglected" species, sometimes considered as weeds in extensive major crop cultivation, may potentially become "new functional crops" in the not so distant future.
Funding:
The work was co-funded by the ERBAVOLANT project (Rural Development policy 2014-2020, Measure 16.1: Support to the Operational Groups of the agricultural European Innovation Partnership (EIP-AGRI)).
Conflicts of Interest:
The authors declare no conflicts of interest.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2014-09-24T00:00:00.000
|
7369583
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.spandidos-publications.com/etm/8/6/1677/download",
"pdf_hash": "17303e3e2c8f47c587beb82ade87cb0d8ae60751",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:799",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "17303e3e2c8f47c587beb82ade87cb0d8ae60751",
"year": 2014
}
|
pes2o/s2orc
|
Significance of hypoxia-inducible factor-1α expression with atrial fibrosis in rats induced with isoproterenol
Atrial interstitial fibrosis plays a dual role in inducing and maintaining atrial fibrillation (AF). Hypoxia-inducible factor-1α (HIF-1α) has been reported to be closely associated with renal, liver and pulmonary fibrosis. However, whether HIF-1α is involved in myocardial fibrosis, and the associations between HIF-1α, transforming growth factor-β1 (TGF-β1) and matrix metalloproteinase-9 (MMP-9), remain unknown. Therefore, this area warrants study for its significance to AF diagnosis and treatment. The present study investigated the expression of HIF-1α in atrial fibrosis and its possible mechanism in isoproterenol (ISO)-induced rats. Three groups of rats, control, ISO and ISO plus sirolimus [also known as rapamycin (Rapa)], were treated for 15 days and sacrificed to remove the myocardial tissues. The expression levels of HIF-1α, TGF-β1 and MMP-9, and their associations with atrial fibrosis, were examined through histomorphology and at the protein and mRNA levels. The protein and mRNA levels of HIF-1α, TGF-β1 and MMP-9 in the ISO group were increased markedly (P<0.01) compared with the control group, while those in the Rapa group were clearly decreased (P<0.01) compared with the ISO group. The protein and mRNA levels of HIF-1α, TGF-β1 and MMP-9 were positively correlated (P<0.01) with atrial fibrosis (collagen volume fraction index), as were the HIF-1α, TGF-β1 and MMP-9 mRNA levels (P<0.01) and the mRNA levels of MMP-9 and TGF-β1 (P<0.01). During the process of atrial fibrosis in ISO-induced rats, HIF-1α promotes the expression of TGF-β1 and MMP-9 protein, and is thus involved in atrial fibrosis.
Introduction
Clinically, atrial fibrillation (AF), which is one of the most common types of arrhythmia, shows high disability and mortality rates in patients (1,2). In recent years, the association between angiotensin II (AngII) and the occurrence and maintenance of AF has received increasing attention. AngII levels increase in AF and eventually induce atrial fibrosis (3). Atrial fibrosis plays dual roles in inducing and maintaining AF (4)(5)(6). A previous study (7) showed that the expression level of hypoxia-inducible factor-1α (HIF-1α) is associated with AngII, which is involved in renal fibrosis. However, no such study has been conducted in myocardial fibrosis. The present study refers to the method by Zhang et al (8), which used subcutaneous bolus injections of isoproterenol (ISO) to induce AngII expression and establish an atrial fibrosis rat model. An HIF-1α inhibitor (9) [sirolimus, also known as rapamycin (Rapa)] was administered to examine the protein expression and mRNA levels of AngII, HIF-1α, transforming growth factor-beta 1 (TGF-β1) and matrix metalloproteinase-9 (MMP-9) in myocardial tissue in the atrial fibrosis rat model, and thus the present study investigated their associations and the possible mechanism by which HIF-1α induces atrial fibrosis following ISO injection.
Materials and methods
Animal model. Thirty healthy male Wistar rats, 180±20 g body weight, were purchased from the Lanzhou School of Medicine Animal Center at Lanzhou University (Lanzhou, China), and were maintained at 20-25˚C with lighting-controlled circadian rhythms (8:00 am-10:00 pm) under normal feeding with free access to food and water. The rats were randomly divided into three groups of 10 rats: Control, ISO and ISO plus sirolimus (Rapa). The animal experiment was approved by the Animal Ethics Committee. The study referred to the method by Zhang et al (8) to establish an atrial fibrosis rat model. The ISO group rats were administered multipoint subcutaneous bolus injections of ISO hydrochloride (batch no. 080705; Shanghai Hefeng Pharmaceutical Co., Ltd., Shanghai, China), 5 mg/g/day, once per day for seven days. The Rapa-intervention group rats were provided sirolimus oral solution (batch no. 110901; Hangzhou Zhongmei Huadong Pharmaceutical Co., Ltd., Hangzhou, China), specification 50 ml:50 mg, initiated on the second day of the same ISO treatment as in the ISO group, 3 mg/kg/day (10), once per day by gavage for 14 days, with an interval of 4-6 h between gavage and subcutaneous injection. Simultaneously, the control and ISO groups were administered an equal volume of double-distilled water by stomach gavage, as for the Rapa group. All the rats were sacrificed by cervical dislocation after 15 days.
Sample collection and preservation. Along the plane of the maximum coronal transverse diameter, partial myocardial tissue was cut and placed in 10% formaldehyde solution for 24 h of fixation. Following this, the tissue was paraffin-embedded and five serial slices (4 µm) were cut from it. Two slices were used for hematoxylin and eosin (HE) and Masson staining to observe the extent of myocardial fibrosis, using the collagen volume fraction (CVF) as the atrial fibrosis index. Immunohistochemistry (IHC) was performed on the remaining three slices to detect the expression of HIF-1α, TGF-β1 and MMP-9. Samples were obtained from the remaining cardiac tissue for detection of AngII by radioimmunoassay, and of HIF-1α, TGF-β1 and MMP-9 expression levels by western blot (WB) analysis and reverse transcription-quantitative polymerase chain reaction (RT-qPCR). The remaining cardiac tissue was cryopreserved in liquid nitrogen.
Detection of AngII levels in the myocardium of rats. A radioimmunoassay kit (Beijing North Institute of Biotechnology, Beijing, China) was used to detect the concentration of AngII.
Observation of myocardial fibrosis. Myocardial tissue underwent formaldehyde fixation, dehydration, clearing, paraffin embedding and sectioning into two 4-µm slices. HE and Masson staining were subsequently performed, and the sections were mounted, then observed and photographed under a light microscope.
The CVF was calculated as CVF = (collagen area/total area of the field of view) x 100%, as follows: Three non-vascular fields (magnification, x400) were selected from each Masson-stained slice. The image analysis software Image-Pro Plus 6.0 (Media Cybernetics, Inc., Rockville, MD, USA) was used for image analysis and myocardial CVF calculation.
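The CVF formula amounts to a pixel count on a segmented image. A minimal sketch of the arithmetic (Python; illustrative only, not the Image-Pro Plus routine the authors used, and the function name and mask input are our assumptions) on a boolean mask of collagen pixels:

```python
def collagen_volume_fraction(mask):
    """CVF = (collagen area / total area of the field) x 100%.

    `mask` is a 2D grid (list of rows) in which truthy pixels mark
    collagen-stained (green, in Masson staining) areas; segmentation of
    the green channel is assumed to have been done beforehand.
    """
    flat = [bool(px) for row in mask for px in row]
    return 100.0 * sum(flat) / len(flat)
```

Per slice, the paper averages this value over three non-vascular fields.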
IHC. Myocardial tissue paraffin sections underwent a number of steps, including dewaxing, heat-induced antigen retrieval, incubation with blocking solution, primary and secondary antibody incubation, diaminobenzidine development, counterstaining, clearing and mounting. Phosphate-buffered saline (PBS) was used in place of the primary antibody as the negative control, and the positive control was provided in the IHC kit (Boster Biological Tech Ltd., Wuhan, China). Rabbit anti-rat antibodies against HIF-1α, TGF-β1 and MMP-9 were all purchased from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA, USA). The primary antibodies were diluted 1:50, and the secondary goat anti-rabbit antibody was provided by Jackson ImmunoResearch Laboratories, Inc. (West Grove, PA, USA) at a dilution of 1:500.
WB analysis. Myocardial tissue was placed in liquid nitrogen-precooled mortars, and an appropriate amount of protein lysis buffer was added, followed by centrifugation. The supernatant was collected and a small portion was used for determination of protein concentration. Protein (100 µg) was mixed with 5X protein electrophoresis loading buffer, placed in a boiling water bath for 5 min, centrifuged at 12,000 x g for 1 min and loaded alongside the protein marker (Fermentas, Waltham, MA, USA). Electrophoresis was performed in Tris-glycine buffer (pH 8.0) at 80 V for 1.5-2 h, followed by transfer to a nitrocellulose membrane (Millipore, Billerica, MA, USA) at a constant voltage of 20 V for 1.5 h. The membrane was removed and blocked in 5% skimmed milk powder in PBS with Tween (PBST) at room temperature with slow agitation for 1.5 h. Primary antibody incubation was as follows: HIF-1α, TGF-β1 and MMP-9 antibodies were purchased from Santa Cruz Biotechnology, Inc. and diluted 1:300 in 5% skimmed milk powder/PBST at 4˚C overnight; the anti-GAPDH (Santa Cruz Biotechnology, Inc.) monoclonal antibody was diluted 1:10,000 in 5% skimmed milk powder/PBST at 4˚C overnight. PBST was used to wash the membrane three times for 10 min each. Secondary antibody incubation was performed as follows: Goat anti-rabbit antibody was diluted 1:2,000 in 5% skimmed milk powder/PBST and sheep anti-mouse antibody (secondary for GAPDH) was diluted 1:2,000 in 5% skimmed milk powder/PBST, incubated at room temperature for 1 h, followed by washing of the membrane three times with PBST for 15 min each. The membrane was incubated with SuperSignal™ West Pico substrate (Pierce, Rockford, IL, USA) for 2 min, then exposed and developed to detect specific protein bands. Images were captured with the gel imaging system, and the band area and gray-level analysis of each protein band were expressed as the integral gray value (D).
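The integral gray value (D) reported by gel-imaging software is, in essence, the sum of background-subtracted pixel intensities over the band region. A simplified sketch (Python; the function name, flat background estimate and manual ROI are our assumptions, not the actual imaging-system algorithm):

```python
def integral_gray_value(band_pixels, background=0.0):
    """Integral gray value (D) of a blot band: background-subtracted
    pixel intensities summed over the band ROI, clipped at zero.

    `band_pixels` is a 2D grid of gray levels inside the band region;
    `background` is a flat background estimate (real software typically
    estimates the background locally, e.g. from a lane profile).
    """
    return float(sum(max(px - background, 0.0)
                     for row in band_pixels for px in row))
```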
Statistical methods. SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analysis. All data are shown as the mean ± standard deviation. Single-factor analysis of variance was used for inter-group comparisons, the least significant difference (LSD) method was used for pairwise comparisons, and the Pearson method was used for product-moment correlation analysis. P<0.05 was considered to indicate a statistically significant difference.
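For reference, the two core computations behind this workflow, the one-way ANOVA F statistic and the Pearson product-moment correlation, can be sketched in plain Python (illustrative only; the study used SPSS 17.0, and the LSD post-hoc step and the p values obtained from the F distribution are omitted here):

```python
import math

def one_way_anova_f(groups):
    """F statistic for single-factor ANOVA: between-group mean square
    divided by within-group mean square.
    Returns (F, df_between, df_within)."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b, df_w = len(groups) - 1, len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient, as used for the
    CVF/expression-level associations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)
```

In practice a statistics package would then convert F to a p value and apply the LSD correction for the pairwise comparisons.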
Results
Myocardial interstitial fibrosis. HE staining showed that the control group exhibited normal spacing between the myocardial nuclei, regular nuclear shapes aligned along the heart muscle and a structured arrangement of the myocardial interstitial fibers (Fig. 1A). The ISO group had increased myocardial interstitial components and widened nuclear spacing; the fibrotic tissue and cardiac muscle fibers were disordered (Fig. 1B). In the Rapa group, the changes in nuclear shape and the disordered arrangement of myocardial interstitial fibrous tissue were reduced to a degree compared with the ISO group (Fig. 1C).
The Masson staining and CVF results showed that, under light microscopy, normal myocardial interstitial collagen components appeared green (light green counterstain), the nuclei appeared blue, and myocardial fibers, cytoplasm and red blood cells appeared red (Fig. 1D-F). The images were analyzed to calculate the level of atrial fibrosis (CVF index), taking the average value as the measurement value. The control group (15.482±0.837%) did not show atrial fibrosis, and the Rapa group (16.730±1.052%) showed a greatly reduced atrial fibrosis level compared with the ISO group (86.704±1.982%) (P<0.01). The difference between the Rapa and control groups was not statistically significant (P>0.05).
AngII levels in the myocardium of rats by radioimmunoassay. The results showed that the ISO (139.402±4.431 ng/l) and Rapa (132.712±5.316 ng/l) groups had significantly increased AngII levels (P<0.01) compared with the control group (31.172±7.271 ng/l).
IHC. Immunohistochemical detection of HIF-1α, TGF-β1 and MMP-9 expression showed brownish-yellow positive staining in the myocardial cells of the rats under microscopy, distributed throughout the myocardial cytoplasm. The ISO group showed stronger expression levels than the control group, while the Rapa group showed markedly reduced levels of expression compared with the ISO group (Fig. 2). Table I. Western blot analysis and RT-qPCR results of HIF-1α, TGF-β1 and MMP-9.
WB analysis and RT-qPCR. HIF-1α, TGF-β1 and MMP-9 mRNA and protein levels were higher in the ISO group than in the control group. The mRNA and protein expression levels in the Rapa group were significantly lower than those in the ISO group (Table I and Fig. 3).
Discussion
The majority of studies show that the renin-angiotensin system (RAS) is activated by AF and, simultaneously, as a major effector molecule of the RAS in the circulation and certain tissues (11), AngII levels increase and eventually induce atrial fibrosis (3). The present study identified that AngII levels in myocardial tissue in the ISO and Rapa groups were significantly higher than in the control group after seven days of repeated high-dose subcutaneous ISO injections in rats, which implies that following ISO injection, the RAS was activated and caused increased expression of AngII in myocardial tissue. According to the morphological observations, no atrial fibrosis was identified in the control group, while the ISO group had significant myocardial interstitial fibrosis, which indicates that high expression of AngII in myocardial tissue may be involved in atrial fibrosis formation, as has been confirmed by a previous study (12).
HIF-1α expression, as a marker of tissue hypoxia, increases during tissue hypoxia, and hypoxia has been linked to fibrosis (13) in the liver (14,15), lung (16) and kidney (17). Although studies have shown that increased HIF-1α gene expression in the myocardium may be involved in structural changes including atrial fibrosis (18,19), this had not been examined in elevated Ang II-induced myocardial fibrosis. The present study found, at the level of pathology and of protein and mRNA expression, that HIF-1α expression in the ISO group was significantly higher than in the control group, that it decreased significantly under the HIF-1α inhibitor rapamycin in the Rapa group, and that it was positively correlated with the degree of myocardial fibrosis (CVF), indicating that Ang II is involved in atrial fibrosis by regulating HIF-1α expression.
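The positive correlations reported here between expression levels and fibrosis degree (CVF) are of the standard Pearson kind. As a quick illustration, the coefficient can be computed by hand; the per-rat values below are hypothetical, not the study's data:

```python
# Pearson correlation coefficient, written out by hand, of the kind used to
# relate HIF-1alpha expression to fibrosis degree (CVF).
# The per-rat values below are HYPOTHETICAL illustrations.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-rat values: relative HIF-1alpha expression vs. CVF (%)
hif = [1.0, 1.4, 2.1, 2.9, 3.3, 4.0]
cvf = [16, 25, 41, 60, 72, 85]
print(round(pearson_r(hif, cvf), 3))  # → 0.999
```

A coefficient near +1, as here, is what "positively correlated with the degree of myocardial fibrosis" denotes; in practice the significance of r would also be tested against the sample size.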
TGF-β1, one of the downstream factors of Ang II (20), is associated with the occurrence of myocardial fibrosis (21), and the present study provides further evidence for this. HIF-1α is closely associated with TGF-β1 and can regulate its expression (22,23). In the present study, TGF-β1 mRNA expression in the ISO group was much higher than in the control group and was significantly reduced in the Rapa group; its expression was positively correlated with HIF-1α expression and with the extent of myocardial fibrosis, implying that HIF-1α can raise the expression level of TGF-β1 and thus induce atrial fibrosis.
In patients with AF, it has been reported that atrial HIF-1α levels rise with increasing MMP-9 expression (24). MMP-9, an important protease in the MMP family, is associated with myocardial matrix remodeling (25), and its increasing activity can result in acute myocardial fibrosis (26,27). In the present study, HIF-1α and MMP-9 mRNA expression levels were significantly increased in the ISO group, were positively correlated with each other, and were positively correlated with atrial fibrosis; thus, HIF-1α may be involved in myocardial fibrosis formation by regulating the MMP-9 expression level. TGF-β1 is closely associated with MMP-9 and can regulate its expression at the gene level (28) and thus induce its synthesis (29). In the present study, it was also observed that the MMP-9 mRNA expression level in the ISO group was markedly higher than in the control group, while in the Rapa group it was significantly decreased, and the expression level of TGF-β1 was positively associated with the degree of myocardial fibrosis. These results imply that increased TGF-β1 expression causes high expression of MMP-9 and thus aggravates myocardial fibrosis, a possible mechanism in the Ang II-induced atrial fibrosis model.
In conclusion, the present study shows that in the ISO-induced atrial fibrosis model, Ang II, HIF-1α, TGF-β1 and MMP-9 were all highly expressed. By inhibiting the expression of HIF-1α, the expression levels of TGF-β1 and MMP-9 decreased accordingly, and the extent of myocardial fibrosis was also reduced. Considering the associations among these factors, we can infer that, during atrial fibrosis formation, HIF-1α promotes the expression of TGF-β1 and MMP-9 protein. A possible signal transduction pathway among HIF-1α, TGF-β1 and MMP-9 may exist, which could contribute significantly to further study of the pathogenesis of AF and a new direction for drug research and development in AF therapy.
MOBILE PHONE USE BY SMALL-SCALE FARMERS: A POTENTIAL TO TRANSFORM PRODUCTION AND MARKETING IN ZIMBABWE
Smallholder farmers are major contributors of horticultural produce, and women's contribution is noteworthy. Meeting market demand on time and avoiding market 'floods' is a challenge among communal farmers, leading to post-harvest losses partly due to lack of information and uninformed decision making. Mobile phones have the potential to connect farmers to markets, close the information gap and enable informed decisions. Currently most farmers target a few markets, leading to market 'floods', low prices and fresh produce deterioration, while some potential markets remain untapped. A survey conducted in 2015 covering 131 farmers in Svosve-Wenimbi, Marondera district of Mashonaland East province in Zimbabwe, evaluated mobile phone ownership and use in farming, and its potential to transform production and marketing. High literacy and mobile phone ownership of 95.32% and 94.45% respectively were reported, with 16% already accessing advisory services over mobile phone. 51.1% utilised various mobile phone services, including accessing market information on inputs and produce, advisory services, weather data, and mobile phone money transfers for transactions and crop insurance. By using mobile phones farmers made informed decisions and saved time and transport costs. Mobile phone ICT can promote better production, marketing, food security and livelihoods, and more farmers may adopt the technology.
INTRODUCTION
There has been rapid transformation and growth in the use of ICT, including mobile phones, in Zimbabwe and Africa as a whole in recent years (Jensen, 2001; eTransform AFRICA, 2012). Unlike some Sub-Saharan African countries where limited infrastructure and trained personnel, as well as general population literacy, hold back the adoption of ICTs (Ewing, Quigless, Chevrolier, Verghese & Leenderste, 2014), Zimbabwe has the infrastructure, with 6 900 km of optic fibre connections, three major mobile service providers (POTRAZ, 2014; TECHZIM, 2015), a high literacy rate of 98% (ZIMSTATS, 2011) and high mobile network subscription. By 2014 the mobile phone subscription rate was 106% (POTRAZ, 2014; TECHZIM, 2014), a figure inflated by dual-SIM phones and multiple phone ownership, with 47.5% (6.1 million) internet subscribers, of whom 99% access the internet on mobile phones (POTRAZ, 2014; TECHZIM, 2015). Zimbabwe has a population of 15.5 million (World Bank, 2015), with 70% living in rural areas (UNICEF, 2015) and depending on agriculture for food security and a livelihood (FAO, 2015). According to Technomag (2014), mobile phone subscription in the rural population was 63% in 2013. Mobile technology can potentially transform all forms of business, including agriculture (Jensen, 2001; Deloitte, 2012; Irefin, Abdu-Azeez & Tijani, 2012; World Bank, 2012; Ewing et al., 2014; Oladele, 2015). Traditionally, communication in rural Zimbabwe has always been limited, but mobile ICT has significantly connected these areas to others locally, regionally and internationally. Mobile ICT has the potential to improve production among rural smallholder farmers by overcoming this traditional isolation (Nyamba & Mlozi, 2012; Oladele, 2015).
Agriculture plays a pivotal role in Zimbabwe's socio-economic development as well as food security and has the potential to significantly reduce poverty, enhance economic growth and consolidate economic stability. It is the major backbone of the country, contributing close to 16% of GDP in 2010 (FAO, 2010); in 2013 and 2014 agriculture contributed 12% and 14% respectively to value-added GDP (World Bank, 2015). Seventy-eight percent of the population living in rural areas is involved in smallholder farming for food security and a livelihood (FAO, 2015), relying on agriculture as subsistence producers or agricultural workers (FAO, 2006). Agricultural produce from the smallholder and commercial farming systems provides food for the nation, raw materials for industry and agricultural exports, playing an important role in food security and the economy of the nation. Agriculture therefore provides employment both to the rural population and in secondary agriculture industries. It is essential that farmers access advisory services and market information for both inputs and produce in order to make informed decisions, and mobile phones enable farmers to get such information (Tadesse & Bahiigwa, 2015). It is essential to raise awareness and promote the use of this ICT platform among smallholder farmers to keep them up to date on weather, farming advice and markets for informed decisions, better planning and improved production.
Research done in Ethiopia, Uganda, Tanzania and China has shown that mobile phones can be used to provide information to farmers and rural residents through SMS and multimedia-supported systems (Martin & Abbott, 2008; Wei & Zhang, 2008; Nyamba & Mlozi, 2012; Chhachhar, Qureshi, Khushk & Maher, 2014; Tadesse & Bahiigwa, 2015), made possible through both public and private sector initiatives. According to Martin & Abbott (2008) and Wei & Zhang (2008), mobile phone use offers real benefits to rural residents in connectivity to the outside world as well as reduced unnecessary commuting to urban centres. From a socio-economic point of view, mobile phones enable easier and more effective sharing of information and knowledge among individuals and with institutions, suppliers and markets. With information on supply markets and prices, markets for products and product prices, weather data and advice, farmers are able to make informed decisions (Nyamba & Mlozi, 2012; Tadesse & Bahiigwa, 2015). In a study by Martin & Abbott (2008), mobile phone use was reported to enable farmers to consult extension advisory and veterinary consultants on a daily basis as well as in emergencies, such as when livestock fall sick. Elsewhere, including in Zimbabwe, farmers make and receive payments as well as insure crops using mobile services (Econet, 2015). By consulting remotely on mobile phones about supplies, product markets, prices and advice, and by using mobile financial transactions, farmers save time and money that would otherwise be spent on travelling (Deloitte, 2012; Nyamba & Mlozi, 2012).
Information management plays a major role in today's world of information abundance and flow, and information technologies represent a means of distributing information and knowledge in a much faster and more efficient way (Krishan, 2000). This has been noted to help farmer groups and extension advisors coordinate meetings and seek the opinions of members absent from a meeting (Martin & Abbott, 2008). Armed with information, farmers make informed decisions, may produce better and may get better markets and prices. The objectives of this study, therefore, were to: i) describe the Svosve-Wenimbi farming system; ii) evaluate mobile phone ownership among the Svosve-Wenimbi area small-scale farmers; iii) investigate farmers' awareness of the usefulness of mobile phones in farming; and iv) establish whether farmers are already using mobile phones for agribusiness and advisory services. This study helps to evaluate the prospects for the use of mobile telephones among farmers as an information technology tool in production and marketing.
RESEARCH METHODOLOGY
A key informant interview was conducted in the Svosve-Wenimbi farming community of Marondera district, followed by a farmer survey. For comparison between genders, the farmer survey included both male and female farmers from the study area. The interviews and surveys were conducted from July 2015 to September 2015. Stratified random sampling was used: the area was stratified into its four wards, at least 30 farmers were randomly picked from each ward and interviewed, and a total of 131 farmers were interviewed.
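The sampling step described above can be sketched in a few lines; the ward rosters and per-ward sample sizes below are hypothetical (the paper reports only "at least 30 per ward, 131 in total"):

```python
# Sketch of the stratified random sampling described: at least 30 farmers
# drawn at random, without replacement, from each of the four wards.
# Ward rosters and exact per-ward counts are HYPOTHETICAL.
import random

random.seed(1)
wards = {f"ward_{i}": [f"farmer_{i}_{j}" for j in range(120)]
         for i in range(1, 5)}

sizes = [33, 33, 33, 32]   # at least 30 per ward, 131 in total
sample = []
for (ward, roster), k in zip(wards.items(), sizes):
    sample.extend(random.sample(roster, k))  # without replacement per ward

print(len(sample))  # → 131
```

Sampling within each stratum (ward) rather than from the pooled roster guarantees every ward is represented in proportion to the chosen sizes.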
The data collected were captured, processed and analysed using the Statistical Package for the Social Sciences (SPSS).
Population description
Descriptive statistics were generated to describe the population and farming system. A chi-square test was done to evaluate mobile phone ownership and use in agribusiness or for farming purposes.
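The chi-square test of independence used here reduces to simple arithmetic on a contingency table. The sketch below writes it out by hand for a 2x2 table of ownership by gender; the counts are hypothetical, since the paper does not report this table directly:

```python
# Pearson chi-square statistic for a 2x2 table of observed counts,
# written out by hand so the arithmetic behind the test is explicit.
# The counts below are HYPOTHETICAL -- the paper does not report this table.

def chi_square_2x2(table):
    """table = [[a, b], [c, d]] of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical ownership-by-gender counts (owns phone / does not):
observed = [[63, 4],   # male
            [61, 3]]   # female
print(round(chi_square_2x2(observed), 3))  # → 0.106
```

A statistic this small (1 degree of freedom) would not reject independence, which is consistent with the paper's finding of a fair distribution of phones between genders; in practice the p-value would be read from the chi-square distribution, e.g. via `scipy.stats.chi2_contingency`.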
The Logit Model
The study uses a binomial logit model to analyse the socioeconomic factors affecting households' decisions to adopt mobile ICTs in agriculture. The dependent variable is dichotomous, i.e. the household's decision to adopt or not adopt mobile ICT in agriculture. The binary logit model is appropriate here because it considers the relationship between a binary dependent variable and a set of independent variables (Fosu-Mensah, Vlek & MacCarthy, 2012). The model uses a logit curve to transform binary responses into probabilities within the 0-1 interval; the parameter estimates are linear in the predictors and the error term (µ) is assumed normally distributed. The specification of the model is as follows: Y = f(X1, X2, X3, X4, X5, X6, X7, X8), where Y = adoption status (1 = adopted, 0 = not adopted); X1 = gender (1 = male, 0 = female); X2 = age; X3 = level of education; X4 = marital status; X5 = cattle owned; X6 = type of crops grown (1 = commercial, 0 = consumption); X7 = source of extension (1 = public, 0 = otherwise); X8 = farm income, as described in Table 1. The hypothesized sign for income is positive: the higher the farm income, the greater the access to markets and the use of ICTs. The income used is the sum of on-farm and non-farm income, as mobile technologies can be bought from either.
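The fitting step behind such a model is maximum-likelihood estimation of the coefficients. A minimal pure-Python sketch, using gradient ascent on the log-likelihood and synthetic data (not the survey data), with two illustrative predictors standing in for income and age:

```python
# Minimal sketch of fitting a binomial logit model by gradient ascent on
# the log-likelihood. The data are SYNTHETIC illustrations
# (adoption ~ income + age), not the survey data.
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, lr=0.5, iters=3000):
    """X: list of feature lists (without intercept); y: 0/1 outcomes."""
    n, k = len(X), len(X[0])
    beta = [0.0] * (k + 1)           # beta[0] is the intercept
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(beta[0] + sum(b * x for b, x in zip(beta[1:], xi)))
            err = yi - p             # gradient of the log-likelihood
            grad[0] += err
            for j in range(k):
                grad[j + 1] += err * xi[j]
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

random.seed(0)
# Synthetic households: standardized income, standardized age
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
# True model: adoption more likely with income, less likely with age
y = [1 if sigmoid(1.0 * xi[0] - 0.8 * xi[1]) > random.random() else 0
     for xi in X]

beta = fit_logit(X, y)
print([round(b, 2) for b in beta])   # income coefficient > 0, age < 0
```

In practice one would use a packaged estimator such as `statsmodels.api.Logit`, which also returns standard errors and the goodness-of-fit measures referred to later in the paper; the sketch only shows how the logit curve maps the linear index into the 0-1 probability interval.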
Demography
Of the 131 farmers interviewed, 51.15% were male and 48.85% were female (Figure 1). The farmers' ages ranged from 16 years to over 90 years (Figure 2), showing that the survey covered a cross-section of the farming community in the Svosve-Wenimbi area of Marondera District. Of the respondents, 70% were married while 30% were single, widowed or separated/divorced (Figure 3) (p < 0.001). As shown in Figure 4, of the 70% of married farmers, 94.4% were farming together (p < 0.001), showing the high dependence of the rural community on agriculture. This indicates that smallholder farming is paramount for the rural population, as reported by FAO (2015). It is therefore important to support optimal production in smallholder farming for food security and to promote livelihoods, and extension is one of the critical areas that will promote better production in this farming system. Literacy among respondents was high (Table 2), confirming the high literacy level in Zimbabwe reported by ZIMSTATS (2011) and indicating good prospects for the adoption of new information and technology. Annual household income in the study area ranged from less than US$100 to over US$400, as shown in Figure 5, with the bulk of the farmers (71%) earning between US$101 and US$400, meaning the households were living on less than US$2 a day.
Farming system
The Svosve-Wenimbi area is characterised by a mixed farming system involving the production of cash crops such as tobacco and horticultural crops, maize as a staple food crop and some pulses, including groundnuts and cowpeas, along with the rearing of livestock (cattle, goats and fowls) for domestic and commercial purposes. Average land size was 3.7 ha in the communal farming area (range 0.45 ha to 6.4 ha) and 23.1 ha in the resettled medium-scale farming area (range 10 ha to 38.8 ha). The production of maize, as the staple food crop and a major determinant of food security, was evaluated. The survey showed that 82.4% of respondents were growing maize on anywhere from less than 10% to over 100% of the land they own (Table 4), with more than 30% using up to 20% of their land, 41.6% using 21 to 30% and about 39% using 31 to 50%. Respondents who used 51 to 100% of their land for maize made up 19.5%, while 1.9% used more than 100% of the land they owned; the farmers who planted beyond 100% of their land to maize owned 0.6 ha and 0.75 ha of arable land and, needing to increase their area of production, rented additional land from other farmers. Some farmers (17.6%) did not grow any maize. Maize yield averaged 1.3 t ha-1 for the communal and 1.0 t ha-1 for the resettled farmers, ranging from 0.2 t ha-1 to about 5 t ha-1. The average yield of less than 1.5 t ha-1 makes production insufficient to meet calorie requirements (Smale & Jayne, 2003), with some farmers producing well below 1 t ha-1, hence the need to promote better production.
Extension is one of the tools that can be used to transfer technology and information that promote better production (Rivera, Qamar & Crowder, 2001). The use of various modes of extension among farmers was evaluated (Table 5). This study showed that extension through radio programmes was accessible to 57.3% of the farmers and field extension staff reached 56.5%, making these the two most common media of extension. Agriculture shows were also relatively popular (38.2%) compared with newspapers at 21.4% and farmer groups at 18.3%.
Mobile phones ranked sixth (16.8%) out of the 11 methods evaluated, better than television programmes and company agronomists (both utilised by 10.7% of the farmers) and pamphlets, used by even fewer (6.9%). Merchandisers and non-governmental organisations (NGOs) were the least popular. Even though it is a relatively new technology compared with all the other methods, a usage rate of 16.8% for mobile platforms indicates reasonable adoption and potential for further adoption of phones for this and other purposes in farming, especially considering the high mobile phone ownership of 94.5% among the farmers and the fair distribution of mobile phones between genders, as shown in Figure 6.
Considering that mobile phone ownership is high, covering almost the entire population, mobile phones provide a potential tool for development and transformation. At the time of the survey, respondents were already using mobile phones for different activities that support farming. Some farmers (54.7%) were using mobile phones to make payments for inputs and services and to receive payments for farm produce. Farmer groups also utilised mobile phones, with 17.6% of the farmers affiliated to these groups using mobile phones to convene meetings as well as to hold discussions virtually. With reference to current mobile platforms, 16.0% were accessing the internet on mobile phones, 17.9% were using WhatsApp, 9.9% Facebook and 1.1% Twitter; farmers were not conversant with social networking platforms beyond these three.
Use of the internet in general, and of internet applications such as WhatsApp, Facebook and Twitter, was below 20%, but the fact that some farmers were using these platforms is promising, since adoption usually starts with a few and spreads to others as they share information. Farmers do share information, as indicated by the 35.5% of respondents who had shared uses of mobile phones that they found helpful in farming. With 53.1% of the farmers confirming that using mobile phones had improved marketing, more farmers are likely to adopt the use of ICTs. Marketing was improved in the sense that farmers could check market prices for inputs or produce and select the best supplier or buyer, and could confirm the availability of products and make appointments with buyers or suppliers without travelling, which saved time and travel costs. Farmers also received updates on products, product prices and produce prices from different sources, including suppliers, markets and other platforms such as Ecofarmer and farming or marketing associations, which saved them time and travel and assisted in decision making. Most of the farmers (72.5%) perceived mobile phones to be useful in farming.
Findings from this study agree with studies conducted elsewhere reporting that farmers received information on mobile phones (Chhachhar et al., 2014; Chhachhar & Hassan, 2013; Martin & Abbott, 2008; Mwakaje, 2010; Nyamba & Mlozi, 2012; Tadesse & Bahiigwa, 2015) that enabled them to make informed decisions (Nyamba & Mlozi, 2012; Tadesse & Bahiigwa, 2015), get better market prices (Mwakaje, 2010) and weather information (Nyamba & Mlozi, 2012), save time and transport costs by overcoming geographical distances through voice calls and text messaging (Deloitte, 2012; Nyamba & Mlozi, 2012), consult agricultural extension staff or advisors (Martin & Abbott, 2008; Oladele, 2015) and use mobile financial transactions (Deloitte, 2012; Nyamba & Mlozi, 2012). Use of other mobile phone applications such as the internet and WhatsApp was reasonable for a rural population. These findings show potential for adoption of the specific farming applications used elsewhere for general crop agronomy; fertilizer, weed, pest and disease management; livestock management; market and farmer location for specific products; and alternative markets and market prices.
The Logit model results
The logit model was tested for goodness-of-fit considering gender, level of education, marital status, cattle owned, types of crops grown, source of extension and farm income. All the measures in Tables 7 and 8 show the overall goodness-of-fit tests, and the results show that the model specification was good overall. Most of the variables tested had the expected hypothesized signs (Table 8). From the logit regression results in Table 9, cattle owned and total income influenced the use of mobile technology positively, while gender, age, years of education, commercial activities and source of extension influenced the use of mobile technologies negatively among smallholder farmers in Marondera.
Table 9 shows the results for the variables considered in the model. Gender, marital status, years of education, number of cattle owned and source of extension information did not significantly affect mobile phone use for farming purposes at p < 0.05. Younger farmers used mobile technologies in farming more than their older counterparts (p < 0.05); new technologies are more readily taken up by the generation in which they are introduced.
Commercial orientation (especially the growing of tobacco and horticulture) significantly influenced the adoption of mobile technologies in agriculture (p < 0.01); however, from the regression results, mobile phone use was higher among the farmers who were less inclined towards commercial activities. The expectation was that farmers need constant interaction with input and output markets to farm viably, and mobile phones offer them this interaction with minimal transaction costs, without constant visits to output and input markets disturbing timely production activities on the farm.
As supported by the results for the wealth factor, income had a positive and significant influence on the use of mobile technologies in agriculture (p < 0.05). Higher disposable incomes result in higher expenditures and more consideration of non-food items such as mobile phones, as understood from Maslow's hierarchy of needs (Maslow, 1970). More disposable income may have made mobile phones more affordable for the farmers.
CONCLUSION AND RECOMMENDATION
Svosve-Wenimbi was characterised by mixed farming that included food (field and horticulture) and commercial crop production as well as the rearing of various livestock. Mobile phone ownership was high at 94.5%, and use for agricultural business, including acquiring production and market information, planning meetings and financial transactions, was 57.5%. With 72.5% of farmers believing that mobile phones are useful in farming, the probability of adoption of current uses among non-users, as well as of new applications and uses among all farmers, is high. Adoption of mobile phone use for farming purposes was influenced by age, commercial farming activities and total income.
Extension or farmer schools to raise awareness of the different uses of mobile phones in farming may improve adoption. Researchers and extension staff can also develop simple applications that farmers can use to verify agronomic and livestock practices and recommendations, as well as market locations and prices, without travelling to consult advisors, suppliers and buyers. Mobile operators are also constantly improving the technology, adding applications that make use of artificial intelligence and improving mobile money transfer services that facilitate greater financial inclusion among farmers. This will promote better production and marketing and reduce transport costs. Using mobile phones in extension will achieve the following: curb transport challenges where extension staff may have no vehicles or sufficient fuel allocations; enable farmers to consult, and extension to advise, in emergencies where the farmer or extension officer may not reach the other in time to save livestock or crops; enable extension to disseminate information rapidly and efficiently over the phone compared with organising meetings or farm visits; enable farmers to get current weather, market, literature and production information; make coordination of extension activities such as training and shows easier; let extension officers save travel time and use it on advisory service; allow extensionists to consult quickly with specialists and give farmers timely advice; and save farmers time when small issues are resolved by chat or voice-call consultations. On the other hand, if most extension work is done via mobile phone with few or no field visits, extension staff may lose relationships with farmers and lack a true picture of what is on the ground. Use of mobile phones should therefore be maximised in extension but combined with conventional extension approaches involving farmer-extension contact and farm visits. Farmers need to embrace the knowledge that the mobile device can not only be used for communication but can also bridge the time gap in agronomic information dissemination.
Figure 1: Gender of farmers interviewed
Figure 5: Average household income per annum
Table 1: Description of variables and expected signs of the model
Table 2: Literacy in the Svosve-Wenimbi area
Table 4: Proportion of farmers' land used for maize production
Table 6: Mobile phone uses in the Svosve-Wenimbi area of Marondera by gender
Clinical Usefulness of Novel Serum and Imaging Biomarkers in Risk Stratification of Patients with Stable Angina
Inflammatory mediators appear to be the most intriguing yet confusing subject regarding the management of patients with acute coronary syndromes (ACS). The current inflammatory concept of atherosclerotic coronary artery disease (CAD) has led many investigators to concentrate on systemic markers of inflammation, as well as imaging techniques, which may be helpful in risk stratification and prognosis assessment for cardiovascular events. In this review, we try to depict many of the recently studied markers regarding stable angina (SA), their clinical usefulness, and possible future applications in the field.
Introduction
Angina is chest discomfort caused by myocardial ischemia without necrosis, further qualified by its precipitating factors, time course to relief, and clinical characteristics, such as pain radiation and quality. Typical angina may be triggered by increased activity (exercise, sexual activity), emotional stress (anger, fright, or stress), or cold, wind, and fever. The discomfort of exertional angina is relieved by rest within 1-5 min or more rapidly with sublingual nitroglycerin and attacks usually last from 2 to 10 min. Characteristically, there is heaviness or pressure retrosternally, with possible radiation to the ulnar aspect of the left arm, neck, jaw, midabdomen, right arm, or shoulders. The average frequency of angina attacks in patients is about 2 per week. Many patients voluntarily cut back their activities to avoid further episodes. Clinically, chronic stable angina (SA) is generally caused by one or more significant obstructive lesions in coronary arteries, defined as stenosis of >50% of the diameter of the left main coronary artery or stenosis of >70% of the diameter of a major epicardial vessel. Precipitating circumstances remain similar between episodes, thresholds may be predicted by patients, and relief patterns become known. Since stenoses are fixed, the angina is due to demand ischemia and seems to be the most common symptom in patients with coronary artery disease (CAD).
Almost 7 million Americans suffer from angina and 400,000 new cases are added each year, resulting in a very high economic burden, estimated at 1.3% of the NHS budget in the UK and $75 billion in 2000 in the USA [1,2]. Interestingly, real-life data on clinical outcome in SA outside randomized controlled trials are lacking, and in recent clinical trials the annual mortality ranges from 0.9% to 2.9%. There has been growing interest over the last 6 years in risk stratification in SA patients specifically; hence risk factor research inevitably followed this concept of individualization (Figures 1 and 2). A small set of readily available clinical characteristics has been found to assist in prognostication of patients with a clinical diagnosis of SA. The presence of any comorbidity, such as diabetes, the severity of angina, a shorter duration of symptoms, left ventricular dysfunction, and ST changes on the resting ECG independently predicted outcome. The predictive model involved these six characteristics to estimate the probability of death or AMI within the year after presentation with SA. This model was found to be simple and objective and allowed discrimination between an extremely low risk population (rate of death and nonfatal infarction <0.5% per year) and patients at high risk over the one-year study period. Its predictive validity was comparable to older models and, more importantly, it was relevant in real-life cases, in contrast with the highly selected populations reported in past randomized controlled studies. In this contemporary evaluation of the prognosis associated with SA, the incidence of death and myocardial infarction was 2.3 per 100 patient-years.
These findings add to the existing published data by Rapsomaniki et al. [4] on the CALIBER prognostic models, which incorporated real-life clinical characteristics highlighted by the 2012 ACCF/AHA [5] and the 2006 ESC guidelines [6] for the initial evaluation, such as deprivation, atrial fibrillation, cancer, liver disease, depression, anxiety, and haemoglobin, factors that had not previously been incorporated in prognostic models for stable CAD, making the outcome data clinically relevant. In line with the above are the data from the Swedish study group in SA [7], reporting that easily accessible clinical and demographic variables provide good risk prediction in SA; these variables included age, among others, with impaired glucose tolerance and an elevated serum creatinine found to be particularly important. In this review article, we address the majority of the novel biochemical (Table 1) and imaging risk factors related to SA, balancing disease-oriented evidence (DOE) with patient-oriented evidence that matters (POEM).
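Prognostic models of this kind typically turn a handful of binary clinical characteristics into an individual event probability through a logistic score. The sketch below illustrates the mechanics only: the weights and intercept are hypothetical placeholders, not the published coefficients of any of the models discussed above.

```python
# Sketch of how a six-factor prognostic model converts binary clinical
# characteristics into a one-year event probability via a logistic score.
# The weights and intercept are HYPOTHETICAL, not the published coefficients.
import math

WEIGHTS = {             # illustrative log-odds weights (assumed)
    "comorbidity":        0.9,
    "severe_angina":      0.7,
    "short_duration":     0.5,
    "lv_dysfunction":     1.1,
    "resting_st_changes": 0.8,
    "diabetes":           0.6,
}
INTERCEPT = -5.0        # assumed baseline log-odds (low-risk population)

def one_year_risk(patient):
    """patient: dict of binary characteristics -> event probability."""
    logit = INTERCEPT + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-logit))

low = one_year_risk({})                          # no risk factors present
high = one_year_risk({k: 1 for k in WEIGHTS})    # all six factors present
print(round(low * 100, 2), round(high * 100, 2)) # → 0.67 40.13 (percent)
```

The appeal of such scores in practice is exactly what the text describes: they are simple, objective, and separate an extremely low-risk group from a high-risk one using data available at presentation.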
1.1. Pathophysiology. The inability of the coronary arteries to increase blood flow in response to increased cardiac metabolic demands is the baseline dysfunction in SA. Normally, the coronary endothelium releases nitric oxide (NO) in response to physical activity or any other demanding cardiac effort. Atherosclerosis damages the endothelium and makes endothelial cells permeable to cholesterol and other harmful substances, resulting in dysfunctional NO release and atherosclerotic plaque formation. In patients with stable CAD, the process of atherosclerosis involves a fundamentally different histopathology compared with ACS or UA. In chronic stable CAD, a small lipid core forms under a very thick fibrous cap with a low proclivity to rupture, narrowing the arterial lumen over time and producing symptoms, whereas in ACS/UA the principal histopathologic picture is that of a large lipid core subtended by a thinned, inflamed cap, which harbors the high-risk or vulnerable plaque with a high proclivity for rupture. When these plaques rupture or suffer "fissuring," clot formation takes over (less in stable CAD, more in ACS/UA) with the usual acute ischemic consequences. The type of substrates exposed to the circulation plays a major role in thrombus formation, as platelets adhere more to exposed collagen than to foam cells (as in SA). It has been recognized that myocardial ischemia results from an imbalance between myocardial energy supply, from insufficient sources of oxygen and substrate (glucose, free fatty acids), and myocardial oxygen demand. Usually this is simply referred to as an imbalance between myocardial oxygen supply and demand, but it should be clear that substrate supply, utilization, and enzymatic activities, along with other variables involved in metabolism and mitochondrial function, play a major role in the pathogenesis of myocardial ischemia in SA and ACS and during reperfusion ischemic injury.
Many of the global relationships and positive feedback loops relating to the inequality of myocardial oxygen supply and demand have not changed in many years, although molecular, electrophysiological, conceptual, and technological advances have been considerable. Myocardial energy imbalance is central to all ischemic syndromes: SA, AMI, and cardiogenic shock. The variables determining myocardial oxygen supply are altered by negative feedback loops from complications of poor left ventricular function. Those factors affecting myocardial oxygen demand (heart rate, afterload, preload, and contractility) are altered by positive feedback loops from those events perpetuating systemic features. An increase in left ventricular end-diastolic pressure (LV-EDP) or volume (LV-EDV) increases preload according to Laplace's Law. Both negative feedback on oxygen supply and positive feedback on oxygen demand tend to increase the inequality between the two and may jeopardize poorly perfused myocardial tissue (Figure 3). When ischemia progresses beyond the reversible stage of angina and myocardial necrosis follows, well-known hemodynamic, metabolic, and mechanical sequelae may occur.

Table 1 (excerpt). Representative findings for selected biochemical markers in SA:
IL-6 — levels correlated with severe LAD stenosis (P < 0.001) and a higher angiographic Gensini score (P < 0.001) in SA patients.
MPO — Liang et al., 2009 [43]: no significant difference between the control (24.2 ± 5.7 µg/L) and SA groups (26.3 ± 4.8 µg/L); MPO levels in patients with ACS (93.6 ± 20.3 µg/L) were significantly higher than in patients with SA and healthy control subjects (P < 0.05).
SDF-1 (CXCL-12) — Stellos et al., 2011 [58]: no correlation of SDF-1 with any biochemical parameter (except an inverse correlation with cholesterol levels, P = 0.035), either in the whole study population or in the SA group; no statistical difference in SDF-1 levels between the NSTEMI and SA groups.
PCT — Sinning et al., 2011 [67]: increased PCT levels in the ACS group versus the SA group (P for trend < 0.0001); increased baseline PCT related to higher cardiovascular mortality (P = 0.00018) and a higher cardiovascular event rate (P = 0.026), and independently related to future cardiovascular death.
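The preload effect invoked above follows the Laplace relation: for a thin-walled sphere, wall stress is sigma = P × r / (2h), so rising end-diastolic pressure or cavity radius raises wall stress, and with it myocardial oxygen demand. A minimal numeric sketch, using illustrative values rather than patient data:

```python
def laplace_wall_stress(pressure_mmhg: float, radius_cm: float,
                        wall_thickness_cm: float) -> float:
    """Thin-walled-sphere Laplace estimate of LV wall stress.

    sigma = P * r / (2 * h), returned in the same pressure units (mmHg).
    """
    return pressure_mmhg * radius_cm / (2.0 * wall_thickness_cm)

# Illustrative values only: a dilated ventricle with raised LV-EDP and a
# thinner wall carries far more wall stress than a normal one.
normal = laplace_wall_stress(pressure_mmhg=8.0, radius_cm=2.5,
                             wall_thickness_cm=1.0)
dilated = laplace_wall_stress(pressure_mmhg=25.0, radius_cm=3.5,
                              wall_thickness_cm=0.8)
```

This is the positive feedback loop described in the text: ischemic dysfunction raises LV-EDP and LV-EDV, which raises wall stress, which raises oxygen demand on already underperfused tissue.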
Current Use of Circulating Biomarkers in CAD.
During the past decades, various serum markers were widely used in the risk management of CAD. Mainly, these were markers of myocardial necrosis, such as aspartate transaminase in the 1950s, creatine kinase (CK) in the 1960s, CK-MB in the 1970s, and troponins in the 1980s, primarily used as diagnostic tests with high negative and positive predictive value. Cardiac troponins are a clear example in clinical medicine where urgent clinical decisions and marker measurement are closely related. Although a vast variety of other markers are routinely checked among patients with CAD, their true clinical use in terms of decision-making is not clear. As an example, serum creatinine has been measured in people with suspected CAD for decades, but only in the last decade has its potential prognostic value been considered.
In patients with SA, circulating biomarkers have been recommended as potentially useful in risk stratification. As an example, the Centers for Disease Control/American Heart Association statement for health-care professionals recommended that one biomarker among SA patients (C-reactive protein, CRP) may be useful as an independent prognostic marker. On the other hand, there is variability between clinicians and centers in which biomarkers are evaluated among SA patients, and only anecdotal evidence exists for biomarker use in everyday clinical practice outside clinical studies.
There are various possible pathophysiologic mechanisms by which these markers may relate to prognosis in SA patients, but these are of secondary importance in urgent clinical decision-making. The primary issue is to understand, if possible, each mechanism responsible for risk prediction and, secondarily, which marker performs best.
High Sensitivity C-Reactive Protein (hs-CRP).
In older men and women, elevated CRP has been associated with an increased 10-year risk of CAD, regardless of the presence or absence of other common cardiac risk factors [8,9]. A single CRP measurement has been shown to provide information beyond conventional risk assessment, especially in intermediate-Framingham-risk men and high-Framingham-risk women. Elevated hs-CRP has previously been related to the amount of necrotic core in the culprit lesion in SA patients. In a study by Kubo et al. [10], the percentage of necrotic core was significantly greater in the elevated hs-CRP group than in the normal hs-CRP group (20 ± 9% versus 16 ± 8%, P = 0.014). The percentage of necrotic core was positively correlated with the serum hs-CRP level (r = 0.20, P = 0.037). Further studies are needed to determine the risk prediction ability of this marker, with a clearer description of the study population and adjustment for simple clinical risk factors, such as age, sex, smoking habits, diabetes, obesity, and lipid panel abnormalities.
Growth and Differentiation Factor-(GDF-) 15.
It is a cytokine involved in cell-differentiation and embryogenesis and belongs to the superfamily of proteins called "transforming growth factor-beta family" along with activins and inhibins [11]. Normally, GDF-15 shows high expression in placental tissue and a very low expression in normal tissue. However, GDF-15 levels are notably increased in various stress conditions, including ACS [12][13][14]. In addition, there is a sense that GDF-15 levels might reflect unique additional information about cardiac risk in general other than just increased inflammatory-induced protein activity. This is supported by data showing that GDF-15 correlates positively with body mass index (BMI) and also relates independently with CRP and NT-proBNP regarding ACS populations [12,15].
A large-scale study regarding the use of GDF-15 levels in SA patients published by Schaub et al. [16] showed that when circulating serum GDF-15 measurement was added to a clinical risk predictive model for CAD mortality, the predictive accuracy improved significantly (from AUC = 0.74 to AUC = 0.85, P = 0.005). In a subgroup of 757 SA patients, GDF-15 levels remained independently associated with mortality, even when adjusted for left ventricular ejection fraction (LVEF) (P < 0.001). In a recently published, prospective, international multicenter study, GDF-15, high-sensitivity cardiac troponin T (hs-cTnT), and B-type natriuretic peptide (BNP) were measured in 646 random patients presenting with acute chest pain to the emergency department. In this study, GDF-15 predicted all-cause mortality independently of and more accurately than hs-cTnT (AUC 0.85 (95% CI 0.81-0.90) versus 0.77 (95% CI 0.72-0.83), P = 0.002) and BNP (AUC 0.75, 95% CI 0.68-0.82, P = 0.007) but did not seem to help in earlier AMI diagnosis [17].
Our suggestion is that these findings, albeit novel and useful, have to be validated by further studies and by different researchers on a multicenter basis, because most of the available data have been reported by the same research group.
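AUC values like those reported for GDF-15 above can be computed directly from marker measurements as a Mann-Whitney rank statistic: the probability that a randomly chosen case outranks a randomly chosen control. A small pure-Python sketch on made-up marker values (not study data):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive case outranks a
    random negative one (Mann-Whitney U / (n_pos * n_neg)).
    Ties count as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Made-up marker values, higher in the event group, as for GDF-15.
died = [3.1, 2.8, 4.0, 2.2, 3.5]
survived = [1.0, 1.4, 2.5, 0.9, 1.8, 2.0]
score = auc(died, survived)  # 29 of 30 pairs correctly ordered
```

An AUC of 0.5 means the marker is uninformative; the 0.75-0.85 figures cited above sit between chance and the near-perfect separation of this toy example.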
Neopterin.
Neopterin is a marker of macrophage activation, atherosclerotic plaque progression, fibrous cap disruption, and intracoronary thrombus formation. It is a pteridine derivative and a byproduct of the guanosine triphosphate-biopterin pathway. Neopterin has been studied in the context of a connection between the inflammatory process and left ventricular (LV) function, as depicted by left ventricular ejection fraction (LVEF) [18]. In recently published data on SA patients, increased neopterin levels showed an inverse correlation with LVEF values, and high neopterin levels were found to be an independent predictive factor for LV dysfunction (LVEF < 45%) (OR 8.52, 95% CI 1.10-65.64; P = 0.040). Receiver operating characteristic analysis for neopterin showed an AUC of 0.736 (95% CI 0.59-0.87, P < 0.009) for the prediction of LV dysfunction [19], suggesting that neopterin could be of clinical value for risk stratification in these patients.
2.1.4. Interleukin-6 (IL-6). Interleukin-6 is a 22-27 kD glycoprotein secreted by activated monocytes, vascular smooth muscle cells, and adipose tissue and acts as both an inflammatory and anti-inflammatory cytokine in response to a stressful insult of any kind such as trauma, infection, and burns. Inflammation has been accepted to play a role at all stages of atherosclerotic CAD including progression and rupture of the plaque [20,21]. Additionally, the discovery that cytokine production is elevated not only in ACS but also in patients suffering from SA may indicate prolonged duration of inflammatory processes in vascular wall [22].
This cytokine has been studied in relation to other biomarkers and conventional risk factors in order to assess its clinical value. In a recent study including 34 patients with SA, levels of IL-6 were correlated with severe stenosis of the left anterior descending artery (LAD) and a higher Gensini score (an objective score of CAD severity). Interestingly, when patient groups were compared, the STEMI and NSTEMI groups had significantly higher IL-6 levels than the SA group (P = 0.002 and P = 0.005, resp.). The sensitivity and specificity of IL-6 as a CAD prediction marker were 46% and 86%, respectively, which led the investigators to conclude that IL-6 levels alone could be useful in ruling out CAD [23]. In other studies, higher IL-6 levels were found in patients who had already experienced UA compared with patients with SA [24-26]. In the PRIME study [27], IL-6 levels showed their value for predicting SA or ACS over a 5-year followup. To our knowledge, this was the first population-based observational study comparing systemic inflammatory mediators in predicting SA in a previously healthy population.
Larger studies combining objective coronary angiographic parameters and histologic findings may be helpful in evaluating the use of IL-6 as risk predictor.
Interleukin-10 (IL-10).
Interleukin-10 is not a new member in ACS research, but there is a growing and controversial literature regarding its prognostic value. This cytokine is mainly expressed in monocytes and type 2 T helper cells (TH2), mast cells, CD4+CD25+Foxp3+ regulatory T cells, and a certain subset of activated T cells and B cells. Recently published data in Nature Medicine showed that IL-10 can also be produced by monocytes upon programmed-death ligand (PD-L1, PD-L2) triggering in these cells [28]. The existing experimental and human data suggest that the PD-1/PD-L1 and PD-L2 pathways play a key role in controlling the immune response of proatherogenic T cell immunity, associated with the pro- and anti-inflammatory process [29][30][31][32]. More specifically, the expression of PD-1 and PD-L1 is significantly downregulated on T cells and myeloid dendritic cells (mDCs) in CAD patients compared with healthy individuals [31]. In a prospective study with 5-year followup, elevated baseline IL-10 levels were found to be an independent predictor of long-term adverse cardiovascular outcomes in ACS patients [33].
Myeloperoxidase (MPO).
It is a 150 kD peroxidase enzyme stored in the azurophilic granules of neutrophils and secreted at sites of inflammation; it interferes in cell oxidation pathways and has a well-documented role in atherosclerotic disease, in terms of plaque progression and vulnerability, along with matrix metalloproteinases (MMPs) [34][35][36][37]. In culprit coronary lesions of SA patients, MPO-producing cells were found at lower concentration and less frequently than in ACS patients [38][39][40][41][42][43].
In a population study of 3,000 patients, high levels of MPO were an independent predictive risk factor for developing CAD in healthy individuals (OR for the highest quartile of MPO 1.36, 95% CI 1.07-1.73) [37]. In addition, in a different study, MPO showed no significant difference between the control (24.2 ± 5.7 µg/L) and SA groups (26.3 ± 4.8 µg/L), but plasma MPO levels in patients with ACS (93.6 ± 20.3 µg/L) were significantly higher than in patients with stable angina and the healthy control subjects (P < 0.05) [44]. Furthermore, in a recent study, there was no significant difference in serum MPO concentrations between patients with SA and controls. Additionally, in the same study, serum MPO levels were significantly higher in AMI and UA patients compared with SA (both P < 0.001), but there was no difference between AMI and UA. At followup, the mean MPO concentrations had significantly decreased in patients with SA (P = 0.008), UA (P < 0.001), or AMI (P < 0.001) and in controls (P < 0.001). These findings are in contrast to data showing increased plasma MPO levels in patients with SA or ACS, or in some cases no difference between SA and controls [39,[45][46][47].
Direct comparison of MPO levels between studies is difficult, because the sampling and laboratory assays for MPO differ. In conclusion, these data suggest that MPO is a powerful marker of acute coronary inflammation and a strong mediator of neutrophil activation. As findings remain controversial between research groups, more data are needed before MPO can be integrated into everyday clinical practice.
Interleukin-17 (IL-17).
Interleukin-17 is a 155-amino acid protein that is a disulfide-linked, homodimeric, secreted glycoprotein with a molecular mass of 35 kD. It is a potent mediator in delayed-type reactions by increasing chemokine production in various tissues to recruit monocytes and neutrophils to the site of inflammation. Interestingly, IL-17 bears no resemblance to any other known proteins or structural domains [48,49].
The role of IL-17 in SA or CAD remains under investigation. It is established that Th17 cells producing IL-17 are involved in the pathogenesis of atherosclerosis inducing vascular endothelial cell apoptosis, but the exact pathway is not clear [50][51][52][53][54]. The hypothesis, which is supported by limited data, is that IL-17 is secreted late on the inflammatory cascade, along with MPO, and attracts adhesion molecules (i.e., intercellular adhesion molecule (ICAM)) which are involved in ACS and have a role in coronary inflammation [50,55].
In a small population study [44], IL-17 levels were compared among patients with ACS, and no statistical difference was found between the SA and control groups (2.3 ± 0.38 pg/mL versus 2.2 ± 0.22 pg/mL, resp.). The important finding in this study was the correlation between plasma MPO and IL-17 levels in all study participants (R² = 0.9110, P < 0.05), supporting the hypothesis that IL-17, like MPO, is a powerful indicator of acute coronary inflammation.

Stromal Cell-Derived Factor-1 (SDF-1; CXCL-12). The stromal cell-derived factor-1 (SDF-1) is a small cytokine that belongs to the larger family of intercrines, chemokines that can be classified into two subgroups, the CC and the CXC families, with SDF-1 belonging to the latter. It is secreted in response to any vascular injury or ischemia and regulates recruitment of CXCR4+ cells to the vascular wall; there is evidence for its crucial role in tissue regeneration and revascularization, reflecting a possible cardioprotective effect after myocardial infarction in vivo [56][57][58].
When SDF-1 was compared with classic cardiovascular risk factors such as arterial hypertension, diabetes, smoking, or hyperlipidemia, no association was found, and no correlation with any biochemical parameter (except an inverse correlation with cholesterol levels, P = 0.035) was found, either in the whole study population or in the SA group [59]. Additionally, there was no statistical difference in SDF-1 levels between the NSTEMI and SA groups. In a recent study on the expression of SDF-1 in nonvalvular paroxysmal or permanent atrial fibrillation, patients with SA had impaired expression of SDF-1 compared with patients with ACS [59], in line with previously reported findings by Stellos et al. [60], showing increased platelet-bound SDF-1 in patients with SA and paroxysmal atrial fibrillation (AF) compared with patients in sinus rhythm or with persistent/permanent AF (P < 0.05 for both); patients with ACS presented with enhanced platelet-bound SDF-1 compared with SA.
Based on currently available data, SDF-1 can discriminate SA from ACS in the presence of nonvalvular arrhythmias, but not SA from acute ischaemic episodes per se, when serum levels are being measured.
Procalcitonin (PCT).
Procalcitonin is a peptide precursor of calcitonin, composed of 116 amino acids and produced by the parafollicular cells (C cells) of the thyroid gland and by the neuroendocrine cells of the lung, intestine, and liver. It is a well-established biomarker in critically ill patients, in terms of predicting mortality, sepsis, and septic shock development, distinguishing bacterial from nonbacterial infections, and helping to reduce unnecessary antibiotic therapy [61,62]. In CAD, the inflammatory response and ischemic damage can lead to PCT production, which is supported by data implicating PCT as a novel biomarker for AMI [63]. Moreover, PCT has previously demonstrated good correlation with the extent of atherosclerosis and has been associated with an adverse outcome [64][65][66]. For SA, its utility has been investigated only in recent years.
Recently [67], PCT was evaluated in a total of 1,300 subjects with SA, among a large cohort of CAD patients. Patients with ACS had increased PCT levels compared with the SA group (0.016 (0.011/0.027) ng/mL versus 0.014 (0.009/0.014) ng/mL; P for trend < 0.0001). There was an association of significantly increased PCT levels with classical risk factors, such as male sex (P < 0.0001), diabetes (P < 0.0001), and BMI > 30 (P < 0.0001). In terms of mortality, increased PCT levels at baseline were related to higher cardiovascular mortality (P = 0.00018) and a higher cardiovascular event rate (P = 0.026) and were also independently related to future cardiovascular death (HR: 1.34; 95% CI: 1.08-1.65; P = 0.0070) when adjusted for clinical variables. On the other hand, when PCT was adjusted for CRP, its association with mortality was lost.
Serum PCT levels might be a representative marker for the patients' inflammatory status and could be used for risk stratification in CAD, but there are few available data regarding SA.
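As a consistency check on figures like the PCT hazard ratio above, a two-sided p-value can be recovered from a hazard ratio and its 95% CI: the CI width on the log scale yields the standard error of log(HR), and a normal z-test follows. A short sketch, assuming normality of log(HR):

```python
import math

def p_from_hr_ci(hr: float, ci_low: float, ci_high: float) -> float:
    """Two-sided p-value implied by a hazard ratio and its 95% CI,
    assuming log(HR) is normally distributed."""
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    z = math.log(hr) / se
    # Two-sided normal tail probability via the complementary error function.
    return math.erfc(abs(z) / math.sqrt(2))

# The PCT figures quoted above: HR 1.34, 95% CI 1.08-1.65.
p = p_from_hr_ci(1.34, 1.08, 1.65)  # close to the reported P = 0.0070
```

The same recovery works for any reported ratio with a symmetric log-scale CI (odds ratios, relative risks), which makes it a quick plausibility check when reading summary tables.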
Fetuin-A.
Fetuin-A has been recognized as an anti-inflammatory cytokine and a modulator of the atherosclerotic process [68]. Its role in cardiovascular disease has previously been investigated in a cohort from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam Study [69] and linked to an increased risk of AMI (as well as stroke) in patients with elevated fetuin-A serum levels. In a study by Bilgir et al. [70], fetuin-A levels were found to be decreased in SA patients presenting with chest pain compared with controls, but higher than in patients with AMI. As far as AMI outcomes are concerned, increased serum fetuin-A has been associated with an excellent survival rate (NPV = 97% overall) [71], even in high-risk populations, suggesting a sound pathogenetic role in the ischaemic event.
2.1.11. Lipoprotein-Associated Phospholipase A2 (Lp-PLA2). This 50 kDa protein is a phospholipase A2 enzyme encoded by the PLA2G7 gene. It belongs to the family of platelet-activating factor acetylhydrolases, known to participate in the atherogenic process, notably in complex plaques [72][73][74].
There is growing evidence of a positive correlation between Lp-PLA2 levels and cardiovascular risk. In the West of Scotland Coronary Prevention Study (WOSCOPS), almost 6,600 hyperlipidemic middle-aged males were followed up for 5 years and inflammatory markers were measured. The strongest predictor of an adverse cardiovascular outcome was Lp-PLA2, independently of traditional markers such as CRP (relative risk per 1 SD increase = 1.18, 95% CI: 1.05-1.33, P = 0.005) [75][76][77]. Regarding ACS, in the PEACE trial, Serruys et al. showed that in patients with stable CAD elevated Lp-PLA2 and hs-CRP levels were significant predictors of acute coronary syndromes (P < 0.005 and P < 0.001, resp.). In addition, Lp-PLA2 was the only significant predictor of coronary revascularization during followup [78]. In a very recent study by Ikonomidis et al. [79] that evaluated 111 angiographically confirmed stable CAD patients, Lp-PLA2 was positively associated with carotid intima-media thickness (CIMT), and in the multivariate analysis Lp-PLA2 was an independent determinant of reactive hyperemia measured by fingertip peripheral arterial tonometry (RHI-PAT), coronary flow reserve (CFR), CIMT, and pulse wave velocity (PWV) in a model including age, sex, smoking, diabetes, dyslipidemia, and hypertension (P < 0.05 for all vascular markers). During a 3-year followup, Lp-PLA2, RHI-PAT, and CFR were independent predictors of cardiac events in this CAD cohort. Overall, elevated Lp-PLA2 concentration was related to endothelial dysfunction, carotid atherosclerosis, impaired CFR, increased arterial stiffness, and adverse outcomes in stable CAD. These findings suggest that the prognostic role of Lp-PLA2 in chronic CAD could prove helpful in clinical practice. Moreover, Lp-PLA2 has recently been promoted as a novel therapeutic target [79,80].
When darapladib, the specific inhibitor of Lp-PLA2, was added to statin therapy in patients with known CHD, there was a reduction in inflammatory markers such as CRP and IL-6, indicating a synergistic effect in inflammation amelioration. In a study by Galis and Khatri [81], darapladib was evaluated for its effect on the vascular wall in patients with angiographically proven CAD. At a dose of 160 mg daily, darapladib decreased necrotic core expansion (−0.5 ± 13.9 mm³; P = 0.71 in the darapladib arm). Currently, two large-scale ongoing trials (STABILITY and SOLID-TIMI 52) aim to show a beneficial effect of Lp-PLA2 inhibition and thereby depict a new therapeutic target in patients with CAD. Mortality outcomes from these cohorts will show the need for a new drug or the need for more laboratory and clinical research in the field.
Matrix Metalloproteinases (MMPs). The levels of MMPs have consequently been evaluated in different CAD patients, including SA and ACS. In a recent study, levels of both MMP-2 and MMP-9 were significantly higher in patients with ACS than in SA patients or healthy controls with normal coronary arteriography, which might indicate that the release of these two MMPs is related to the pathophysiology of ACS only [90]. Additionally, in another study [91], plasma levels of MMP-8 and MMP-9 did not correlate with any common risk factor, such as waist circumference or smoking, but were highly correlated with MPO (both R² = 0.80, P < 0.001). In the same study, neutrophils of SA patients released more MMP-9 in response to IL-8 than those of controls. In agreement with a number of previous studies [92,93], there were no significant differences in circulating levels of MMP-9 between SA patients and controls. Interestingly, plasma levels of MMP-8 did not differ between SA patients and controls, which is in contrast with previous studies [94,95] that have shown raised plasma MMP-8 in SA patients.
In conclusion, since the neutrophil release of MMP-9 is thought to be an early marker of neutrophil activation, these findings may depict persistent neutrophil activation in SA patients but do not clarify the value of MMPs in risk stratification.

Tissue Inhibitors of Metalloproteinases (TIMPs). They are the main regulators of matrix metalloproteinase activity and comprise a family of four protease inhibitors, TIMP-1, TIMP-2, TIMP-3, and TIMP-4. The balance between TIMPs and MMPs is thought to be decisive for plaque stability. Interestingly, reduced amounts of TIMP-1 and TIMP-2 (the main endogenous regulators of MMP-8 and MMP-9 activity) have been reported in unstable atherosclerotic lesions compared with stable atherosclerotic lesions [96].
There are very limited and controversial data regarding SA patients, with a few clinical studies reporting increased plasma levels of TIMP-1 in SA patients [97], while others show levels similar to those of healthy subjects [92]. Likewise, the clinical impact of circulating TIMP-2 levels has been conflicting. Therefore, so far we can only theorize about the effects of high levels of TIMPs in SA. Their potential implications remain to be clarified in future studies.
C-Terminal Provasopressin (Copeptin).
Copeptin is the C-terminal of provasopressin, composed of 39 amino acids and secreted from neurohypophysis in response to stimuli (hemodynamic or osmotic type). It has been recently proposed by several study groups as an early marker of AMI risk stratification and prognosis in chronic heart failure [98][99][100][101][102][103][104][105][106]. There are few available data about copeptin and its prognostic value in SA patients.
In a large cath lab cohort (2,700 patients; SA group, n = 1,384) [107], copeptin was evaluated for its prognostic value regarding morbidity and mortality. Interestingly, patients with a family history of CAD had significantly higher baseline copeptin levels (P = 0.0141). A Kaplan-Meier analysis showed that patients with increased copeptin levels (serum level ≥ 20.4 pmol/L) suffered more events of the combined primary endpoint, and more all-cause deaths alone, at 90 days than patients with lower levels. However, despite the promising data, we note that the primary endpoint of this study was a combined adverse outcome endpoint, which is of limited value compared with a mortality outcome alone.
In short, copeptin may be a useful prognostic tool for the prediction of major adverse cardiovascular events such as AMI, stroke, and all-cause mortality in CAD patients, but these findings cannot be extrapolated in SA. Further studies should investigate copeptin exclusively in SA patients and the optimal cutoff value.
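Survival comparisons like the copeptin Kaplan-Meier analysis above rest on the product-limit estimator, which can be sketched in a few lines; the follow-up data below are toy values, not study data.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times:  follow-up time for each subject
    events: True if the endpoint occurred, False if censored
    Returns (time, survival_probability) pairs at each event time.
    """
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, d in sorted(zip(times, events)):
        if d:  # an event: survival drops by the conditional hazard
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # events and censorings both leave the risk set
    return curve

# Toy 90-day follow-up: 6 subjects, events at days 10 and 45,
# the rest censored (e.g. alive at last contact).
curve = kaplan_meier([10, 30, 45, 60, 90, 90],
                     [True, False, True, False, False, False])
```

A study then compares curves between groups (here, high versus low copeptin), typically with a log-rank test; this sketch only shows how each curve itself is built.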
MicroRNAs.
MicroRNAs (also known as miRs or miRNAs) are small non-coding RNAs, approximately 25 nucleotides long, that negatively regulate gene expression by binding to the 3′ untranslated regions of targeted messenger RNAs [108]. They have been found to be involved in many biological processes, from cellular differentiation, proliferation [109,110], cell death, apoptosis [111,112], and synaptic plasticity [113] to immunity [114] and cardiovascular development [115], as well as cardiovascular diseases [116,117].
In a study by Latronico and Condorelli [118] that examined circulating miRNA expression in plasma of patients with CAD compared to controls, aiming to identify novel biomarkers in SA and UA, ROC curve analyses showed a good diagnostic potential (AUC ≥ 0.85) for miR-1, miR-126, and miR-483-5p in patients with SA. Moreover, cluster analysis showed that the combination of miR-1, miR-126, and miR-485-3p in SA correctly classified patients compared with controls, with an efficiency of ≥87%. Interestingly, none of the investigated combinations of miRNAs was able to reliably discriminate SA from UA patients. Moreover, the study showed that specific plasmatic miRNA signatures have the potential to accurately discriminate patients with angiographically documented CAD from matched controls.
Further studies are needed, with larger populations, to address the potential utility of plasmatic miRNAs as biomarkers of SA, as well as to clarify the mechanisms of their release in serum.
Imaging.
Compared with simple exercise electrocardiography testing (XECG), perfusion imaging with 201-thallium or 99m-technetium-sestamibi raises sensitivity, but its prognostic value is less established [119]. Perfusion imaging is particularly useful when the resting ECG is abnormal, specifically in women, because of false positive results on XECG [120]. In symptomatic patients who have had prior revascularization, reversible areas of ischemia may be quantified and localized to specific areas of the myocardium [121]. 99m-Technetium-sestamibi produces better and faster images with decreased attenuation but has lower sensitivity for viable myocardium than 201-thallium and is more expensive. Increased lung uptake after testing, left ventricular dilation, and multiple perfusion defects are associated with left main coronary or severe multivessel disease and should be followed by coronary angiography. Patients with two or more perfusion defects and ventricular dysfunction are also candidates for angiography. Perfusion imaging as a single test has been found to lower rates of hospital admission by up to 52% when evaluating acute chest pain in the emergency department [122]. A number of differences in plaque density between patients with SA and AMI have been reported using optical coherence tomography (OCT) imaging to assess plaque vulnerability [123]. Survivors of AMI who were undergoing percutaneous interventions, and those with stable lesions in multiple vessels, had OCT images taken of infarct-related lesions or lesions slated for revascularization, as well as non-infarct-related and nontarget lesions. The OCT images revealed intracoronary thrombus in all patients suffering an AMI and in none of the patients with SA. A ruptured coronary plaque was identified in 77% of AMI patients but only 7% of SA patients, suggesting differences in plaque pathophysiology.
With the increasing use of hybrid single photon emission computed tomography (SPECT/CT) devices, myocardial perfusion imaging (MPI) and coronary artery calcium (CAC) scoring can easily be combined and performed in a single session. However, in symptomatic patients with a very high CAC score, it is still unclear whether MPI provides any benefit in terms of the resulting implications for treatment and short-term prognosis. In a recent study by Prescott et al. [124] in patients with a low/intermediate risk of a coronary event, with suspected but unconfirmed CAD and a high CAC score (≥1,000), ischaemia on MPI was a strong predictor of coronary revascularization (OR 13.1; 95% CI: 7.1-24.3; P < 0.001). However, nonischaemic MPI does not exclude revascularization, and patients with persisting complaints should be considered for invasive angiography. In the same study, patients scanned with the cadmium-zinc-telluride (CZT) gamma camera had fewer equivocal SPECT findings (6% versus 18%, P = 0.002) and more often underwent stress-only imaging (30% versus 16%, P = 0.0018).
In the ongoing iPOWER study [125], conducted to determine whether routine assessment of coronary microvascular dysfunction (CMD) in women with angina and no obstructive coronary artery disease is feasible and can identify women at risk, Doppler examination and measurement of the coronary flow reserve (CFR) of the left anterior descending artery were found to be feasible. At the end of this study, which will recruit approximately 2,000 patients, clearer conclusions regarding the prognostic value of routine noninvasive techniques for microvascular function are expected.
Conclusions
There is growing evidence suggesting that the use of a fixed marker panel, combined with classical, easily accessible pre-test data, may augment prognostic strength and accuracy in clinical practice [4,7,128,129]. Based on current data, we believe that using a biomarker combination for risk stratification or mortality prediction, and adding an imaging study with incremental value over clinical predictors, stress testing, and coronary calcification, such as CCTA, rather than a stand-alone marker, is the right clinical direction in SA.
Moreover, taking into account the very low reported mortality rates in SA in the era of newly available pharmacological agents (i.e., ranolazine) [130], a systematic evaluation of concrete combinations of biomarkers and imaging studies on a long-term, large-scale basis is deemed important in order to select the patients who would benefit. Future research on microRNAs seems promising for clarifying the vague area of the inflammatory cascade in SA, bridging the pathophysiologic and clinical findings in order to predict outcomes effectively.
With the emergence of novel, sensitive biomarkers of inflammation, myocyte necrosis, vascular damage, and hemodynamic stress, it is becoming possible to characterize noninvasively the participation of different contributors in any individual patient. Although several novel biomarkers have been proposed for risk stratification in SA, and our understanding of the specific biochemical role of each marker in the disease is still limited, it is plausible that elevated levels of circulating markers of inflammation reflect an intensification of the focal inflammatory processes that destabilize vulnerable plaques.
Cardiac serum and imaging biomarkers provide a convenient and noninvasive means, in clinical practice, of gaining insight into the underlying causes and consequences of stable CAD that mediate the risk of recurrent or new events and may be targets for specific therapeutic interventions.
|
v3-fos-license
|
2019-06-07T17:14:00.000Z
|
2019-06-07T00:00:00.000
|
174801628
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nzjmath.org/index.php/NZJMATH/article/download/189/69",
"pdf_hash": "a9f9f5f8548f4b9d99d05c990a8af15838652257",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:805",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "110771e60a3b9a8ba4709118ed94b02e132d8481",
"year": 2021
}
|
pes2o/s2orc
|
Embedding Heegaard Decompositions
A smooth embedding of a closed $3$-manifold $M$ in $\mathbb{R}^4$ may generically be composed with projection to the fourth coordinate to determine a Morse function on $M$ and hence a Heegaard splitting $M=X\cup_\Sigma Y$. However, starting with a Heegaard splitting, we find an obstruction coming from the geometry of the curve complex $C(\Sigma)$ to realizing a corresponding embedding $M\hookrightarrow \mathbb{R}^4$.
Introduction
The tools for showing that a closed 3-manifold M does not smoothly embed in R 4 seem rather primitive. There does not seem to be any M which embeds in some integral homology 4-sphere Σ 4 and is known not to embed in R 4 . But tools for this would be highly desirable since Budney and Burton's [1, §4] 3-manifold survey turns up four examples of closed 3-manifolds M embedding in a homotopy 4-sphere for which no embedding in R 4 is known. This raises the possibility that 3-manifold embeddings could be used to detect exotic structures.
Our goal here is to find a bridge between the rich subject of surface dynamics, e.g. the mapping class group, and embeddability in the hope that the coordinate structure of R 4 will make an essential appearance. We are partially successful. We find a robust connection between the very coarse "handlebody metric" d H on the curve complex C(Σ) recently studied in [6] and embeddability of the corresponding Heegaard decompositions. Said another way, we turn d H into an obstruction to embedding f : M → R 4 with the fourth coordinate already prescribed. We have known this result since 2012 but have been unable to accomplish the obvious next step: find some residual obstruction which is independent of the fixed 4 th coordinate function, i.e. define a true embedding obstruction based on surface dynamics. Since [6] has now appeared in print and our argument provides a simple application, possibly with yet unrealized potential, we present it here.
Suppose we are given a Morse function f : M → R; how can we show it is not the fourth coordinate of any embedding in R 4 ? Actually, the obstruction we formulate will not make use of the entire data of the Morse function but merely the Heegaard decomposition M = X ∪ Σ Y canonically determined (up to isotopy) by f . The handlebody X is a neighborhood of the ascending manifolds of the critical points of index 2 and 3, and Y is a neighborhood of the descending manifolds of the critical points of index 0 and 1.
Our chief tool is the curve complex C(Σ) [4] and its metrics. The vertices of C(Σ) are isotopy classes of simple closed curves (sccs) on Σ, and Hempel [5] introduced the metric d, the largest metric in which disjoint sccs have distance 1. We will exploit a much coarser metric d H , the "handlebody distance," defined as the largest metric in which any two sccs bounding disks in the same handlebody H, ∂H = Σ, have distance 1. This distance is easily seen to be quasi-isometric to the "electrification metric" d E recently introduced in [6], where it is proved that diam d E (C(Σ)) = ∞ for genus Σ ≥ 2. So for us, a key fact will be (1.1) Let D(X) ⊂ C(Σ) be the set of sccs bounding disks in the handlebody X, ∂X = Σ. We prove the following: We actually supply two proofs (using slightly different techniques, yielding slightly different constants, and supporting different generalizations).
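Both d and d H are path metrics on graphs with the same vertex set but different edge sets: "the largest metric in which such-and-such pairs have distance 1" is exactly the path metric of the graph whose edges join those pairs. Purely as an illustrative sketch (the curve complex itself is an infinite, locally infinite graph, so nothing here computes d or d H on a real surface), path distance in a finite graph is computed by breadth-first search:

```python
from collections import deque

def graph_distance(adj, src, dst):
    """Path distance between src and dst in an unweighted graph,
    given as an adjacency dict; returns None if dst is unreachable."""
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

# Toy graph: a - b - c, with d isolated.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b'], 'd': []}
```

Adding edges to a graph can only shrink its path distances, which is why d H , built from the far larger edge relation, is coarser than d (d H ≤ d pointwise).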
Ambient Morse Theory
This section recalls an "ambient" version of Morse theory appropriate to embedded submanifolds M 3 → R 4 .
When speaking of an embedding f : M → R 4 , we will feel free to change the target space to S 4 or S 3 × R, by adding or deleting a point, without renaming the map or calling other attention to the change.
Suppose we are given a codimension-1 smooth embedding g : M 3 → R 4 of a closed connected 3-manifold with fourth coordinate g 4 = f . Using only elementary general position arguments, one constructs an isotopy from g to an embedding g′ for which the critical points of the fourth coordinate g′ 4 occur in order (higher index critical points take larger values). Such Morse functions will be called ordered.
Some choices are made in this procedure which could influence the order of handle attachments but not the diffeomorphism type of the Heegaard decomposition M = X ∪ Σ Y , where Y = ∪ (handles of index = 0, 1) and X = ∪ (handles of index = 2, 3). The topology of X (Y ) relative to Σ is, however, independent of any choices. Proof. Both lemmas are proven by sliding Σ up and down the gradient lines of the Morse function until the first collapse of an essential scc in Σ is observed.
Two Distance Estimates
Proof of Theorem 3.1. We may compactify horizontal slices and consider the embedding of M as an embedding into S 3 × R. Write M = X ∪ Σ Y ; since lens spaces do not embed in R 4 , we may assume without loss of generality that g(Σ) ≥ 2. Note that Σ ⊂ S 3 may not be a Heegaard surface for S 3 , but by Lemma 2.1 it must contain at least one scc of D(X) and one scc of D(Y ) (which might be identical) which compress into S 3 .
Notation. S 3 = A ∪ Σ B and V ⊂ A and W ⊂ B are maximal compression bodies for Σ in A and B, respectively (see [2] for the definition of a compression body).
At most one of V and W is a product collar (since S 3 is non-Haken). By Fox [3, Main Theorem] (1) Maher and Schleimer studied in [6] a metric d E which is clearly quasi-isometric to d H . d E is defined by adding a new vertex h to C(Σ) for each handlebody H, ∂H = Σ, and adjoining an edge of length 1 between each scc in D(H) and h. They prove that for genus Σ > 2, diam d E = ∞. Thus we have: since diam d H (C(Σ)) = ∞, Theorems 3.1 and 3.2 obstruct certain (in some sense, most) Morse functions, or Heegaard decompositions M = X ∪ Σ Y , from arising via an embedding M → N × R for N = S 3 , or more generally N closed, reducible, and containing no incompressible surface.
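For reference, "quasi-isometric" here carries its usual meaning: the two metrics agree up to multiplicative and additive constants, so infinite diameter in one implies infinite diameter in the other. Explicitly, d E and d H are quasi-isometric when there exist constants K ≥ 1 and C ≥ 0 with

```latex
\frac{1}{K}\, d_E(\alpha,\beta) - C \;\le\; d_H(\alpha,\beta) \;\le\; K\, d_E(\alpha,\beta) + C
\qquad \text{for all vertices } \alpha,\beta \in C(\Sigma).
```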
|
v3-fos-license
|
2021-04-24T05:39:40.581Z
|
2021-04-16T00:00:00.000
|
234858354
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://joss.theoj.org/papers/10.21105/joss.02969.pdf",
"pdf_hash": "80b38db1a4306cf60fffcd6e26bb4fcfe9403572",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:806",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"sha1": "80b38db1a4306cf60fffcd6e26bb4fcfe9403572",
"year": 2021
}
|
pes2o/s2orc
|
Visualization of Multi-Dimensional Data – The data-slicer Package
Statement of Need
From prehistoric cave-wall paintings to the invention of print and most recently electronic harddisks, human data storage capacity has evolved tremendously. Information/data is of great value and hence associated with innovation and technological progress. This is especially true in analytical disciplines i.e. all sciences ranging from physics to psychology and medicine. In observational sciences, most measurement techniques undergo steady improvements in acquisition time and resolution. As a result the sheer data throughput is continually increasing. Examples of techniques where the typical data output has moved from 1D to 3D in the past few decades are shown in Figure 1.
More data is always welcome. However, in many disciplines human digestion of these large amounts of data has now become the bottleneck. In many fields, for example those working at large scale synchrotron facilities where the duration of the experiment is limited, scientists require a means of quick data inspection and of carrying out a fast preliminary analysis in order to take decisions on the course of the experiment. Many of the existing powerful and versatile visualization tools (Ahrens et al., 2005; Fedorov et al., 2012; Mayavi, n.d.; VisIt, n.d.) are not well suited for this purpose and cannot easily support the specific and often changing needs of a given discipline. The result is that each community ends up developing their own solutions to the problem of quick data visualization and inspection, e.g. (Lass et al., 2020; Stansbury & Lanzara, 2020). However, since these implementations are usually intertwined and entangled with the community-specific parts, such solutions are typically not transferable across different disciplines or experimental methodologies.
We have developed PIT and the data-slicer package to address these needs, offering tools for fast live visualization of data at a general scope that can easily be adjusted and fine-tuned for more specific problems.

Summary

data-slicer is a Python package that contains several functions and classes providing modular Qt (Riverbank Computing, 2020; The Qt Company, 2020) widgets, tools and utilities for the visualization of three-dimensional (3D) datasets. These building blocks can be combined freely to create new applications. Some of these building blocks are used within the package to form a graphical user interface (GUI) for 3D data visualization and manipulation: the Python Image Tool (PIT). The relation between the different elements of the package and external software is schematically depicted in Figure 2.
PIT
PIT consists of a number of dynamic plot figures which allow browsing through 3D data by quickly selecting slices of variable thickness from the data cube and further cutting them up arbitrarily. Two core features of PIT deserve explicit mention. The first is the ability to quickly create slices of the 3D data cube along arbitrary angles. On the GUI side this is facilitated through a simple draggable line to select the slice direction; the superior speed of the operation is enabled by the use of optimized functions. The second feature worth mentioning is the inclusion of an ipython console which is aware of the loaded data as well as of all GUI elements. The console immediately enables users to run any analysis routine suitable to their respective needs. This includes running python commands in a workflow familiar to pylab or Jupyter (The Jupyter Project, 2020) notebook users, but also loading scripts into the console or running them directly from it, using ipython's line magic functions %load and %run respectively. Effectively, this design is central in empowering users to accomplish any task imaginable, as long as it is possible to achieve with python.
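The arbitrary-angle cuts can be illustrated with a deliberately crude, pure-numpy sketch: nearest-neighbour sampling along a rotated line through the centre of one 2D slice of the cube. This is not data-slicer's actual implementation (which uses optimized interpolation routines); it only shows the geometric idea.

```python
import numpy as np

def line_cut(plane, angle_deg, num=None):
    """Sample a 2D slice along a line through its centre at an
    arbitrary in-plane angle, using nearest-neighbour lookup."""
    nx, ny = plane.shape
    num = num or max(nx, ny)
    half = min(nx, ny) / 2 - 1
    t = np.linspace(-half, half, num)        # positions along the cut
    theta = np.deg2rad(angle_deg)
    xs = np.clip(np.round(nx / 2 + t * np.cos(theta)), 0, nx - 1).astype(int)
    ys = np.clip(np.round(ny / 2 + t * np.sin(theta)), 0, ny - 1).astype(int)
    return plane[xs, ys]

plane = np.arange(25.0).reshape(5, 5)  # toy slice of a data cube
cut = line_cut(plane, 0)               # angle 0: varies the first axis only
```

A real implementation would interpolate between grid points (and extend the line to a plane through the full 3D cube), but the coordinate rotation is the same.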
Plugins
It is clear that it can get complicated and tedious to run certain types of data processing or analysis from the ipython console, as described in the previous paragraph. For such cases, PIT provides an additional level of customizability and control through its plugin system. Plugins are regular python packages that can be loaded from within PIT and enhance it with new functionality. A plugin can interact with all elements in PIT via the same interfaces as can be done through the built-in ipython console. Creating a plugin therefore requires little programming skills and no further knowledge of the inner workings of PIT. In this manner, different communities of users can create and share their field-specific plugins which allow them to customize PIT to their needs.
As an example, we mention the ds-arpes-plugin (Kramer, 2020), which provides basic functionalities for loading ARPES datasets and handles for typical analysis functions, customized and tailored to be used from within PIT.
Modularity and widgets
PIT is constructed in a modular fashion, consisting of different widgets that have been combined into a useful, ready-to-use tool. However, different applications may require slightly different functionalities, and the setup in PIT may not be optimal for them. The data-slicer package therefore makes all the widgets used in PIT, and some additional ones, independently available to the user. These widgets can be combined arbitrarily to create customized applications in a relatively simple manner.
In summary, the data-slicer package solves the problem of offering the right scope (neither so specialized that it can only be used by a narrow community, nor so bloated that specific operations become hard) by offering a variety of methods for users of varying backgrounds to get exactly the tools they need. On the first and most general level, PIT offers a ready-to-use GUI for quick 3D data visualization without any need for programmatic user input. Users can satisfy more specific needs either through use of the console or by implementing a plugin, both of which can be accomplished with little programming knowledge. On the last and most specific level, users can arrange the building blocks contained in the package to create completely new applications, or embed PIT or other parts of the data-slicer package into an existing application.
|
v3-fos-license
|
2020-07-13T14:20:09.434Z
|
2020-07-13T00:00:00.000
|
220491013
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40618-020-01350-1.pdf",
"pdf_hash": "c1ac9448555f13dd36f3308ff2ecd958f00bf852",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:807",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"sha1": "c1ac9448555f13dd36f3308ff2ecd958f00bf852",
"year": 2020
}
|
pes2o/s2orc
|
Addressing male sexual and reproductive health in the wake of COVID-19 outbreak
Purpose The COVID-19 pandemic, caused by the SARS-CoV-2, represents an unprecedented challenge for healthcare. COVID-19 features a state of hyperinflammation resulting in a “cytokine storm”, which leads to severe complications, such as the development of micro-thrombosis and disseminated intravascular coagulation (DIC). Despite isolation measures, the number of affected patients is growing daily: as of June 12th, over 7.5 million cases have been confirmed worldwide, with more than 420,000 global deaths. Over 3.5 million patients have recovered from COVID-19; although this number is increasing by the day, great attention should be directed towards the possible long-term outcomes of the disease. Despite being a trivial matter for patients in intensive care units (ICUs), erectile dysfunction (ED) is a likely consequence of COVID-19 for survivors, and considering the high transmissibility of the infection and the higher contagion rates among elderly men, a worrying phenomenon for a large part of affected patients. Methods A literature research on the possible mechanisms involved in the development of ED in COVID-19 survivors was performed. Results Endothelial dysfunction, subclinical hypogonadism, psychological distress and impaired pulmonary hemodynamics all contribute to the potential onset of ED. Additionally, COVID-19 might exacerbate cardiovascular conditions; therefore, further increasing the risk of ED. Testicular function in COVID-19 patients requires careful investigation for the unclear association with testosterone deficiency and the possible consequences for reproductive health. Treatment with phosphodiesterase-5 (PDE5) inhibitors might be beneficial for both COVID-19 and ED. Conclusion COVID-19 survivors might develop sexual and reproductive health issues. Andrological assessment and tailored treatments should be considered in the follow-up.
Introduction
The global outbreak of coronavirus disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) represents an unprecedented challenge for healthcare. Despite social distancing and isolation measures, the number of affected patients is growing daily. Hyperinflammation and immunosuppression are prominently featured in COVID-19 [1,2], resulting in a cytokine storm [3] ultimately leading to development of micro-thrombosis and disseminated intravascular coagulation (DIC). This cytokine storm is strongly associated with the development of interstitial pneumonia (IP) [4]; however, although lungs are the primarily targeted organs, the cardiovascular system is globally affected. Evidence in this regard supports the
Testosterone and COVID-19: friend or foe?
It is well established that ACE2 is the entry point for SARS-CoV-2 in host cells [12]. In males, adult Leydig cells express this enzyme, suggesting that testicular damage can occur following infection [18]. Testicular damage in COVID-19 might, therefore, induce a state of hypogonadism, as proven by the decreased testosterone-to-LH ratio in patients with COVID-19, suggestive of impaired steroidogenesis resulting from subclinical testicular dysfunction [19,20]. Post-mortem examinations of testicular tissue from 12 COVID-19 patients showed significantly reduced Leydig cells, as well as edema and inflammation in the interstitium [21]. A recent report on 31 male COVID-19 patients in Italy identified that some patients developed hypergonadotropic hypogonadism following the onset of the disease [22]. In the same study, lower levels of serum testosterone (total and free) acted as predictors of poor prognosis in men with SARS-CoV-2 infection [22]. Whether this state of hypogonadism is permanent or temporary is a question so far left unanswered. Testosterone acts as a modulator of endothelial function [23] and suppresses inflammation by increasing levels of anti-inflammatory cytokines (such as IL-10) and reducing levels of pro-inflammatory cytokines such as TNF-α, IL-6 and IL-1β [24]. It can, therefore, be hypothesized that suppression of testosterone levels might be one of the reasons for the large difference in mortality and hospitalization rates between males and females, and might also explain why SARS-CoV-2 most commonly affects older men.
On the other hand, androgens seem to play a pivotal role in COVID-19 by promoting the transcription of the transmembrane protease, serine 2 (TMPRSS2) gene. The encoded protein primes the spike protein of SARS-CoV-2, therefore, impairing antibody response and facilitating the fusion between the virus and the host cells [25]. This hypothesis could explain the higher prevalence of COVID-19 in men, although it would fail to explain the rationale for the higher mortality rates, as well as the worse clinical outcomes, for elderly patients.
Additional studies would, therefore, be needed to understand whether testosterone treatment might be beneficial or deleterious for the clinical course of the disease. However, independently of whether testosterone is a friend or foe for COVID-19, it should be acknowledged that the testis is a target for SARS-CoV-2 and the possibility for long-lasting consequences on the endocrine function exists, even for recovered patients.
COVID-19 and the endothelium
Solid evidence accumulated over the last decades supports the notion that erectile function is an excellent surrogate marker of systemic health in general, and of vascular performance in particular [26], sharing plenty of risk factors with cardiovascular disease. This is described by the equation ED = ED (endothelial dysfunction equals erectile dysfunction, and vice versa) [27]. Vascular integrity is necessary for erectile function [28], and the vascular damage associated with COVID-19 is likely to affect the fragile vascular bed of the penis, resulting in impaired erectile function [5,7]. COVID-19 features a state of hyperinflammation promoted by TNF-α, IL-6 and IL-1β [29]; the same inflammatory cytokines have been associated with clinical progression of sexual dysfunction [30]. It is worth noticing that the pro-inflammatory cytokines are also closely tied to testosterone levels: as previously stated, hypogonadal patients have higher concentrations of TNF-α, IL-6 and IL-1β as a result of impaired suppression. This ultimately worsens the endothelial dysfunction, further impairing erectile function. However, whether testosterone replacement therapy (TRT) would improve endothelial function is still debated: while largely beneficial in the treatment of hypogonadal men, TRT has known harmful effects if inappropriately prescribed [31], and a meta-analysis did not find any conclusive evidence of a potentially therapeutic effect of testosterone administration, either acute or chronic, on endothelial function [32]. While erection is, of course, a trivial matter for patients in Intensive Care Units (ICUs), there is reason to suspect that impaired vascular function might persist in COVID-19 survivors and even become a public health issue in the next few months.
Moreover, given that erectile function is a predictor of heart disease [33,34], investigating whether COVID-19 patients develop ED might also be a good surrogate marker of general cardiovascular function, improving patient care and quality of life.
A COVID eclipse of the heart: potential for cardiovascular burden
Besides the effects on endothelium, SARS-CoV-2 infection can also dramatically affect the heart and exacerbate underlying cardiovascular conditions. Reports of myocarditis in COVID-19 patients have piled up in the last months [35][36][37]; similarly, arrhythmias and acute cardiovascular events have been described in other coronavirus and influenza epidemics [38][39][40] and are likely to be expected for SARS-CoV-2 as well [41]. COVID-19 survivors are, therefore, more likely to develop severe cardiovascular consequences. However, treatment is not exempt from possible side effects, among which sexual dysfunctions are remarkably common. Drugs such as β-blockers and antihypertensive agents, routinely used in COVID-19 patients, have the potential to impair sexual function [41]; therefore, both the cardiovascular consequences and their treatment might ease progression from subclinical to a clinically overt ED [42,43].
Additionally, as stated in the III Princeton Consensus Panel [50], sexual activity should be delayed until the cardiac condition has been stabilized in high-risk patients. Such patients include those with uncontrolled hypertension, recent myocardial infarction or high-risk arrhythmia, which are all conditions closely associated with COVID-19 [51].
Reproductive health and COVID-19
Another reason for worry lies in the reported testicular damage from COVID-19 infection. In fact, ACE2 is highly expressed in the testis, suggesting the possibility of testicular infection since the early stage of the disease [52]. Being expressed in both Sertoli and Leydig cells [18,53], ACE2 plays key roles in spermatogenesis and in the regulation of steroidogenesis. Due to the involvement of Sertoli cells, reproductive function might similarly be affected. Additionally, ACE2 is also expressed by spermatogonia, therefore, increasing the risk of SARS-CoV-2 presence in seminal fluid [54,55].
Studies investigating the presence of SARS-CoV-2 in seminal fluid have, for the largest part, found no evidence of the virus [56][57][58][59]. However, as other studies have shown different results [60], the topic of reproductive health is still largely debated. In post-mortem examinations, seminiferous tubular injury was reported despite no evidence of the virus in the testis [21]. Identification of SARS-CoV-2 in semen is of the utmost importance, as sperm cryopreservation is an undelayable necessity for many men, such as those who are about to start gonadotoxic treatments [61]. In Italy, cryopreservation procedures for oncological patients have continued during the COVID-19 pandemic, using utmost care to limit the risk of transmission; for non-oncological patients, the prospects of biological parenthood could be compromised as a consequence of delaying diagnostic semen analysis and sperm banking [62]. At the beginning of the pandemic, discontinuation of reproductive care was recommended by international societies for reproductive medicine, with exceptions only for the most urgent cases; as containment and safety strategies have mitigated the spread of the disease, several centers for assisted reproductive technology have resumed their activity, although with very precise rules for operators [63,64].
Further studies should, therefore, be designed with the aim to clarify this point, above all among "COVID-19 asymptomatic" men requiring assisted reproductive technology (ART).
The psychological burden of COVID-19
Increased rates of post-traumatic stress disorder (PTSD), depression and anxiety are expected in the general population, and even more in COVID-19 survivors, following the pandemic [65][66][67][68]. A parallel can be drawn between the psychological consequences of COVID-19 and those coming from similar disasters, such as the 9/11 attacks [69] or earthquakes [70], and similar short-and long-term treatment strategies are, therefore, needed to provide adequate care. Confinement and the illness in itself are both causes of stress; while only a minority of individuals might be more vulnerable to psychological trauma, there is no doubt that most people would experience some degree of emotional distress following isolation, social distancing, loss of relatives and friends, difficulties in securing medications, as well as the obvious economic consequences of lockdown. Sexual activity is closely associated with mental and psychological health; it is, therefore, unsurprising that sexual desire and frequency have declined in both genders during this pandemic [71,72]. There is, therefore, reason to suspect that psychological suffering might exacerbate pre-existing subclinical sexual dysfunctions [73]. Additionally, the potential for SARS-CoV-2 transmission by kissing might lead to increased distress in the couple [74], with the resulting negative effects on sexual health and on couple dynamics. Additionally, the hypogonadal state reported in COVID-19 could lead to a significant worsening in sexual desire and mood [75,76].
Pulmonary fibrosis and the effects of hypoxia
It has been suggested, on the basis of interesting evidence, that there could be substantial fibrotic consequences following SARS-CoV-2 infection [77,78]. Indeed, pulmonary fibrosis is a well-acknowledged consequence of acute respiratory distress syndrome (ARDS), with further evidence coming from survivors of the 2003 SARS outbreak (caused by the SARS-CoV) [79,80]. Pulmonary fibrosis impairs the physiologic lung mechanisms, reducing the pulmonary gas exchange and, therefore, impairing oxygen saturation [81,82]; functional disability has been proven in ARDS patients several years after the acute phase of the disease [83]. There is currently no evidence concerning the possible long-term impairment of lung function following SARS-CoV-2 infection; however, considering the scale of the current pandemic and the similarities between SARS-CoV and SARS-CoV-2 [84], there is sufficient reason to suspect a high rate of fibrotic lung function abnormalities in COVID-19 survivors. In such patients, the impaired oxygen saturation could impair erectile function; some evidence in support comes from animal models [85,86] as well as from clinical reports [87,88]. From a pathophysiological standpoint, this is hardly surprising, as oxygen is one of the substrates required for the synthesis of nitric oxide (NO) by the enzyme NO synthase, whose activity is severely blunted in hypoxia [87].
Phosphodiesterase-5 inhibitors in COVID-19
Phosphodiesterase-5 (PDE-5) belongs to the PDE superfamily of enzymes, acts at the last step of the NO/cGMP/PDE pathway, and is one of the key targets in the drug treatment of ED. NO activates guanylate cyclase in responsive cells, such as endothelial cells, resulting in increased concentrations of the second messenger cGMP (cyclic guanosine monophosphate), which in turn induces relaxation of smooth muscle. PDE acts downstream and reduces the effects of cGMP by catalyzing its degradation; PDE inhibitors prevent degradation of cGMP, resulting in prolonged or enhanced action [89].
PDE-5 is highly expressed in vascular smooth muscle cells [90] and, at high concentrations, in those of the penile corpora cavernosa [91]; therefore, thanks to this action and to their high affinity for the specific type 5 isoform [92], PDE-5 inhibitors have been approved for the treatment of ED since 1998. However, a growing body of evidence has also proven their usefulness as therapeutic agents in different conditions, due to their anti-inflammatory and antioxidant actions, as reported in diabetes [93], hypertension and chronic kidney disease [94]. Sildenafil, the first PDE-5 inhibitor approved for the treatment of ED following its serendipitous discovery [95], has also been investigated as a treatment for COVID-19 patients; indeed, sildenafil improves pulmonary hemodynamics, as shown in idiopathic pulmonary fibrosis [96], by reducing vascular resistance and remodeling in the pulmonary circulation [97]. Additionally, by inhibiting neointimal formation and platelet aggregation, sildenafil might also prove beneficial with regard to the risk of vascular injury and thrombotic complications in COVID-19 patients [98]. Evidence from new trials will prove fundamental to assess the clinical benefits of PDE-5 inhibition on the overall burden of COVID-19 [99].
Conclusions
In conclusion, there is ample reason to suspect that male sexual and reproductive health could be affected in survivors by the sequelae of COVID-19, both in the short and the long term (Fig. 1). Erectile function, as a surrogate marker of cardiovascular and pulmonary health, could also become extremely valuable as a quick and inexpensive first-line assessment of pulmonary and cardiovascular complications in COVID-19 survivors. In this regard, evidence coming from diagnostic procedures, such as penile color-Doppler ultrasound [43] and hypothalamic-pituitary-testicular axis evaluation [100], will be necessary to assess the extent to which COVID-19 has impaired erectile, and ultimately vascular, function, the former being an efficient predictor of complete restitutio ad integrum. Additionally, tailored psychological interventions will be necessary to adequately support patients who develop sexual dysfunction as a consequence of the containment measures.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Ethical approval This manuscript is a review of the literature and does not contain original research either on animal or on human subjects.
Informed consent For this type of study, informed consent is not required.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Does Corporate Social Responsibility Create Value in Acquisitions? Evidence from the German Market
This paper examines the impact of a firm's Corporate Social Responsibility (CSR) level on abnormal stock returns around merger and acquisition (M&A) announcements. Using a sample of transactions announced by German DAX-listed acquirers between 2017 and 2022, the analysis assesses whether CSR creates value for acquiring firms' shareholders and offers a comprehensive discussion of potential factors supporting or opposing this notion. Our study seeks to fill a notable gap in the German literature on the relationship between CSR performance and abnormal stock returns surrounding M&A announcements. Building upon prior research findings in the US and in an international sample, our investigation focuses on the German market. Employing event study methodology, our results indicate that M&A transactions of German-listed acquirers did not yield significant negative or positive cumulative abnormal returns for event windows of 3 and 11 days. Furthermore, based on multiple linear regression, no evidence was found that CSR positively or negatively influenced abnormal stock returns following M&A announcements, suggesting that positive and negative effects potentially offset each other. The outcomes of our research have important implications for investors, as CSR initiatives do not serve as a positive trading signal guaranteeing excess returns, which contrasts with findings from previous studies in other developed countries. For managers, it is essential to concentrate on factors beyond CSR performance, such as synergies and fit. Finally, both managers and investors should not view CSR as a shareholder value-enhancing short-term investment but as an integral component of fostering sustainable business development.
Introduction
Many companies have increased their investments in CSR as part of their strategic orientation or in response to growing stakeholder requirements regarding their social and environmental impact. McWilliams and Siegel (2001) depicted CSR as actions that go beyond the interests of a company and result in social good. According to Hill et al. (2007), CSR is defined as economic, legal, moral, and philanthropic actions influencing relevant stakeholders. The growing importance of CSR for companies' operations is best reflected in the increasing demand for socially responsible investment (SRI) funds from an investor's perspective (Cellier and Chollet 2016). According to a report by the Forum for Sustainable Investments, the total volume of SRI funds at the end of 2022 amounted to EUR 475.8 billion in Germany, up 16% compared to the previous year (FNG Marktbericht Deutschland 2023).
As a result of the increasing importance of CSR in managerial and strategic practice, corporate social activities have been put on the research agenda. In particular, the question of whether CSR leads to value creation or, on the contrary, destroys value for shareholders is under much debate (Aktas et al. 2011; Broadstock et al. 2020; Tampakoudis et al. 2021; Cho et al. 2021). Mergers and acquisitions (M&A) provide an interesting setting to investigate this question, as M&A transactions can be considered one of the most important managerial decisions (Jost et al. 2022), having a substantial impact on shareholder wealth (Teti et al. 2022).
Despite an increasing number of studies, however, CSR within the context of M&A is still underrepresented (Gonzàlez-Torres et al. 2020; Meglio 2020). Two noteworthy studies, Deng et al. (2013) and, more recently, Zhang et al. (2022), have indicated a positive relationship between CSR performance and value creation in the context of M&A for the US and selected international markets, respectively. CSR performance could be expected to impact M&A success due to its potential to foster positive stakeholder reactions that, for example, facilitate integration and reduce associated costs, which can serve as an optimistic signal to investors (Zhang et al. 2022). CSR could, however, also be perceived negatively for M&A success (Meckl and Theuerkorn 2015), as, for instance, high CSR standards could make integrating the target more difficult and costly, and a focus on CSR efforts could potentially divert management's attention from a rigorous M&A execution process.
While there is isolated evidence for the US and selected international markets, to the best of our knowledge, there is no study examining this relationship within the German market, which ranks among the world's largest economic centers. Furthermore, existing studies lack a rigorous discussion of the potential positive and negative effects of CSR performance on investor perception in an M&A context. Thus, this paper aims to determine the reasons for, as well as the extent to which, short-term shareholder value creation through M&A is attributable to an acquiring company's level of CSR. More specifically, our empirical analysis focuses on the influence of CSR performance on abnormal stock returns of an acquiring company around M&A announcements.
Consequently, the following two research questions have been developed to conduct an empirical analysis:
1. What are the potential positive and negative effects of CSR investments by the acquirer in an M&A setting?
2. Does an acquiring company's level of CSR influence abnormal stock returns in the context of M&A announcements in Germany?
We use environmental, social, and governance (ESG) scores to measure a firm's CSR performance and thus follow previous research and existing investment practices (Krishnamurti et al. 2020; Tampakoudis et al. 2021; Barros et al. 2022; Damtoft et al. 2024). Investors, analysts, and fund managers consider CSR more as company-specific branding. Therefore, they use ESG criteria to analyze securities to obtain quantifiable sustainability measures. In their view, ESG criteria allow for fully capturing corporate sustainability's holistic nature in a standardized and comparable framework (Walz 2019).
The main objective of our analysis is to discuss and examine the relationship between a company's level of CSR and short-term potential value creation, measured by the adjustment of stock prices following the M&A announcement. To investigate this, we used event study methodology to quantify the financial impact of 231 M&A transactions by German acquirers on short-term value creation from 2017 to 2022. Furthermore, we divided our observation period into two subsamples, pre-COVID-19 and interim-COVID-19, to control for effects from the pandemic. Our results show that the M&A transactions of German-listed acquirers yielded a slightly negative CAR for event windows of 3 and 11 days, respectively, but in both research settings without statistical significance. Applying multiple regression analysis, we found no evidence that CSR performance positively influences abnormal stock returns following M&A announcements for the covered period. We conclude that, for the German market, we cannot confirm the positive findings observed in the US market by Deng et al. (2013) and in a broader international sample by Zhang et al. (2022), casting doubt on the generalizability of their results. Supported by a comprehensive discussion of potential effects, we conclude that the positive and negative effects of CSR on the value perception of investors around M&A announcements seem to offset each other. Consequently, we suggest that decision-makers should not rely heavily on CSR-related measures as value-adding investments.
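The event-study logic behind such cumulative abnormal returns (CARs) can be sketched in a few lines: abnormal returns are the differences between realized returns and the returns predicted by a market model fitted over an estimation window preceding the event, and the CAR sums them over the event window. The following Python sketch is purely illustrative; the variable names, the 120-day estimation window, and the default event window are our assumptions, not the authors' implementation:

```python
import numpy as np

def car_market_model(stock_ret, mkt_ret, event_idx, est_len=120, window=(-1, 1)):
    """Cumulative abnormal return (CAR) around an announcement day.

    Fits the market model R_stock = alpha + beta * R_mkt over an
    estimation window ending just before the event window, then sums
    the abnormal returns over the event window.
    """
    lo, hi = window
    # Estimation window: est_len trading days ending before the event window.
    est = slice(event_idx + lo - est_len, event_idx + lo)
    X = np.column_stack([np.ones(est_len), mkt_ret[est]])
    (alpha, beta), *_ = np.linalg.lstsq(X, stock_ret[est], rcond=None)
    # Event window: abnormal return = realized minus model-predicted return.
    ev = slice(event_idx + lo, event_idx + hi + 1)
    abnormal = stock_ret[ev] - (alpha + beta * mkt_ret[ev])
    return abnormal.sum()
```

For the paper's 3-day setting one would use `window=(-1, 1)`; for the 11-day setting, `window=(-5, 5)`.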
Our study contributes to the existing literature in two ways. First, only a few empirical studies analyze the relationship between CSR and the financial performance of German-listed firms (Fischer and Sawczyn 2013; Velte 2017). No empirical research has examined the relationship between CSR and value creation in German M&A using announcement effects. Thus, our work aims to fill this gap in the existing literature for developed countries and complements the existing research by providing results from German data. Second, our analysis provides a comprehensive discussion of both the potential positive and negative effects of CSR investments by acquirers in an M&A context. While existing studies have typically adopted a single perspective or argument for hypothesizing an impact on corporate transactions (e.g., Zhang et al. 2022), we have not found any study that has comprehensively discussed both views, providing an overview of the entire setting in the CSR-M&A context and arguing for a counterbalancing net effect.
Focusing on Germany as a representative developed country in the EU and using a sample period from 2017 to 2022, our study examined a region and a period subsequent to the introduction of a new EU-wide regulation on CSR reporting, the Non-Financial Reporting Directive (NFRD) (Directive 2014/95/EU). The aim of this regulation is to require larger companies in the EU to report on environmental, social, employee, human rights, anti-corruption, and bribery matters in order to promote better CSR performance. Implementing the NFRD was regarded as a significant step towards greater business transparency. The new directive became effective in 2018, with reporting for the first time for the 2017 fiscal year.
The remainder of this paper is structured as follows. Section 2 introduces the theoretical framework and the development of the hypothesis and reviews the extant literature on the association between CSR and financial performance. Section 3 describes the sample selection and methodological approach. Empirical results, their interpretation, and the illustration of the limitations will be presented in Section 4. Finally, Section 5 concludes the analysis.
Theoretical Framework and Related Literature
Research examining the relationship between CSR and a company's financial performance draws on several theoretical arguments centered around the balance of interests among various stakeholder groups, resulting in an ambivalent view.
The so-called shareholder expense view suggests that managers may act in the interests of other stakeholders, neglecting the interests of shareholders (Deng et al. 2013). Scholars following this view suggest a negative association between CSR investments and a firm's financial performance and argue that engaging in socially responsible activities results in additional costs, representing a waste of valuable resources (Cho et al. 2021). Tampakoudis and Anagnostopoulou (2020) explained that when CSR investments are perceived by investors as an agency cost caused by managers, they have a negative impact on financial performance. These costs put firms at a competitive disadvantage compared to other, less socially responsible firms (McGuire et al. 1988). For instance, Waddock and Graves (1997) considered the decision to invest in pollution control equipment when other firms do not as an example of a cost-incurring action. The added costs may also result from making extensive charitable contributions, promoting community development plans, maintaining plants in economically depressed locations, and establishing environmental protection procedures (McGuire et al. 1988). In addition, concern for social responsibility may limit a firm's strategic alternatives. A company, for instance, may refrain from certain product lines, such as weapons or pesticides, and avoid plant relocations and investment opportunities in specific locations (McGuire et al. 1988). The suggestion of a negative link between CSR and financial performance aligns with Friedman's doctrine and other neoclassical economists' arguments. Friedman (1970) claimed that firms have minimal ethical obligations besides following the law and maximizing profits. Hence, firms are not obliged to invest in socially responsible activities, as they mainly incur costs and, thus, reduce profits and shareholder wealth. According to Friedman (1970), managers use CSR as a private benefit for pursuing their careers or other hidden agendas at the expense of shareholder wealth. In doing so, they create a conflict of interest (Jiao 2010). Moreover, Zahid et al. (2022a) also emphasized a negative relationship between a company's ESG activities and corporate financial performance as measured by return on assets (ROA) for a dataset consisting of Western European companies. They also stated that the inverse effect between ESG and financial performance is stronger when the company is audited by a Big Four accounting firm.
In sharp contrast, other scholars analyzing the relationship between CSR and financial performance have argued for a positive relationship. The so-called stakeholder value maximization view postulates a positive effect of CSR activities on shareholder value (Deng et al. 2013; Cho et al. 2021). Based on corporate stakeholder theory, it is argued that CSR activities positively affect shareholder wealth because focusing on the interests of external stakeholders increases their willingness to support a company's operation (Freeman 1984; Deng et al. 2013). Corporate stakeholder theory relies on contract theory and the theory of the firm by Coase (1937). It was later expanded by Cornell and Shapiro (1987) and Hill and Jones (1992). According to these theories, the value of a firm depends not only on the cost of explicit claims but also on its implicit claims (McGuire et al. 1988). The firm is described as a nexus of contracts between shareholders and other stakeholders in which each group supplies the firm with critical resources or efforts (Deng et al. 2013). These contributions are received in exchange for claims outlined in explicit contracts (e.g., wage contracts, product warranties) or suggested in implicit contracts (e.g., promises of job security to employees and continued service to customers). At the same time, explicit contracts have full legal standing, whereas implicit contracts do not.
Consequently, firms can default on their implicit commitments without legal recourse from other stakeholders (Deng et al. 2013). Hence, the value of implicit contracts depends on other stakeholders' expectations about a firm honoring its commitments (Cornell and Shapiro 1987). Because firms that invest more in CSR appear to have a more substantial reputation for keeping their commitments, stakeholders of these firms are more likely to contribute resources and efforts to the firm (Aktas et al. 2011; Deng et al. 2013). As a result, they would be willing to accept less favorable explicit contracts than stakeholders of low-CSR firms. Focusing on stakeholders' interests increases their willingness to support a firm's operations, which may increase shareholder wealth (Deng et al. 2013). In practice, firms that satisfy stakeholders' expectations and needs may benefit from increased sales (Ambec and Lanoie 2008), decreased costs (Porter and van der Linde 1995), reduced financial risk (Godfrey et al. 2009), and improved reputation (Brammer and Millington 2005). Thus, firms perceived as high in CSR may benefit from more low-cost implicit claims than other firms, potentially leading to better financial performance for these companies. In this context, Edmans (2011) also argues that, consistent with human capital-centered theories of the firm, employee satisfaction as one dimension of good CSR performance should be positively correlated with shareholder returns.
Moreover, a company's commitment to socially responsible activities may improve its standing with critical external stakeholders such as bankers, investors, and government officials. This may lead to additional economic benefits for the company. For example, Spicer (1978) reported that banks and other institutional investors acknowledge that social considerations play a substantial role in their investment decisions. Therefore, a high CSR commitment may facilitate a firm's access to sources of capital and reduce its cost of capital (El Ghoul et al. 2011; Cheng et al. 2014; Goss and Roberts 2011; Ye and Zhang 2011).
Other theoretical concepts are related to the direction of causality between CSR and financial performance. Waddock and Graves (1997) called them the slack resources and good management theories. One view is that better financial performance enables firms to use slack resources for investments in their social performance (Waddock and Graves 1997). Hence, better financial performance predicts superior CSR performance if slack resources are allocated to firms' social activities. The good management theory, on the contrary, reverses this cause-effect relationship. It argues that good CSR performance leads to superior financial performance of the firm. As attention to CSR improves relationships with key stakeholder groups (e.g., employees and customers), better financial performance is achieved through greater stakeholder engagement and resource commitment. It manifests as increased sales or reduced costs (Waddock and Graves 1997), which is largely in line with the above-mentioned stakeholder value maximization view. A moderating effect of the CEO and his or her characteristics on the relationship between corporate financial performance and CSR performance also seems to be significant (Zahid et al. 2022b). This is, of course, relevant, as members of the board, particularly the CEO, are responsible for strategic decisions, which include M&A activities as well. Furthermore, based on tournament theory, incentives can motivate CEOs to act more socially responsibly, leading to higher CSR commitment of the company and positively affecting corporate social responsibility performance (Khan et al. 2022).
The latter argument also paves the way for linking M&A success with CSR performance. M&A deals are essential investment decisions that can substantially impact shareholder value. However, whether this effect is positive or negative is unclear against the background of the two competing theories, namely the shareholder expense view and the stakeholder value maximization view. We summarize the competing opinions in Figure 1 and elaborate on them in the following. According to the shareholder expense view, socially responsible activities incur additional costs and put the company at a competitive disadvantage, which could negatively affect the acquirer's financial performance. Companies heavily investing in CSR initiatives might face financial constraints, potentially limiting their ability to properly integrate the target, leading to negative perceptions among investors.
Furthermore, investors might anticipate higher integration costs due to additional expenses for alignment and standardization caused by the acquirer's high CSR standards. For instance, extensive reporting requirements naturally entail time-intensive tasks, require significant employee capacity, and consequently result in high costs.
Finally, prioritizing CSR efforts could potentially divert management's attention from a rigorous M&A execution process. M&A transactions inherently demand substantial managerial involvement and oversight, often requiring exhaustive due diligence, strategic planning, and seamless integration efforts. These tasks necessitate a considerable allocation of resources, time, and expertise from top management. As M&A requires meticulous attention to detail and swift decision-making, any diversion of managerial focus towards CSR activities could potentially dilute the effectiveness of the M&A process, leading to negative investor perception.
On the contrary, following the stakeholder value maximization view, investor perception of merger activities might be positive. M&A transactions often involve different groups of stakeholders whose approval or support is required for making a decision. Consistent with good management theory, companies with strong CSR performance tend to have better reputations and stakeholder relations. In an M&A situation, stakeholder interests may be damaged (Segal et al. 2021). However, positive relationships between management and stakeholders, such as customers, suppliers, and employees, can increase the likelihood of a positive reaction from stakeholders to the M&A announcement. Support from these stakeholders for a transaction can be a strong positive signal for investors and enhance the M&A announcement effect.
In addition, CSR can be used to assess the cultural fit between the acquiring and target companies. Since there is a high interdependence between a company's culture and its engagement with CSR, by evaluating CSR performance, companies can identify potential cultural differences or synergies between the two organizations (Meckl and Theuerkorn 2015). This can help facilitate the integration process and reduce post-merger integration costs. As a result, investors could anticipate a smoother integration and, consequently, factor the higher probability of M&A success into their reaction to the deal announcement.
Moreover, CSR can act as a risk management tool during M&A transactions and thus help reduce transaction costs. Companies can identify and mitigate potential ecological and social risks associated with the target company by considering CSR factors as part of the due diligence process. This can help reduce uncertainty for shareholders of the acquiring company and increase the chances of a successful M&A transaction (Gomes and Marsat 2018; Meckl and Theuerkorn 2015).
In conclusion, considering the two contrasting views, both the positive and the negative arguments hold validity. It thus seems reasonable to suggest that they could potentially outweigh each other, resulting in a net effect where neither a positive nor a negative impact on investor perception of CSR activities in the context of M&A is observed. This leads us to the following hypothesis, which we test for the German market in this study: An acquiring company's level of CSR neither positively nor negatively influences investor perception in the context of M&A announcements.
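A hypothesis of this form can be examined with a cross-sectional regression of announcement CARs on the acquirer's ESG score plus deal controls, where a t-statistic on the ESG coefficient close to zero is consistent with the null of no effect. The following NumPy sketch is our own illustration with hypothetical variable names and homoskedastic standard errors, not the authors' exact specification:

```python
import numpy as np

def esg_effect_on_car(car, esg, controls):
    """OLS of CAR_i = a + b * ESG_i + c' * controls_i + e_i.

    Returns the ESG coefficient b and its t-statistic.
    """
    n = len(car)
    X = np.column_stack([np.ones(n), esg, controls])
    beta, *_ = np.linalg.lstsq(X, car, rcond=None)
    resid = car - X @ beta
    k = X.shape[1]
    sigma2 = resid @ resid / (n - k)       # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)  # coefficient covariance matrix
    return beta[1], beta[1] / np.sqrt(cov[1, 1])
```

In practice one would add deal- and firm-level controls (size, relative deal value, payment method) as columns of `controls` and use robust standard errors; the sketch only shows the basic test of the null.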
Consistent with the conflicting predictions of the contrasting theories, it is not surprising that the empirical results to date are also ambiguous and need further rigorous validation. Previous research has yielded mixed findings on the relationship between a firm's level of CSR and its financial performance. The extensive body of literature in this research field can be divided into two strands of empirical studies. The first type of study investigates the relationship between CSR and long-term financial performance (McWilliams and Siegel 2000). Long-term financial performance is usually determined by using accounting- or market-based profitability measures. For instance, early work by Aupperle et al. (1985) did not observe any relationship between CSR and profitability. More specifically, a correlation between varying levels of social orientation and performance differences was not found (Aupperle et al. 1985). These findings are consistent with Arlow and Gannon's (1982) conclusion that research studies have not strongly supported a positive association between profitability and CSR. Several more recent studies have come to similar conclusions, finding no or rather negative relationships while using various financial performance measures and samples (Brammer et al. 2006; Makni et al. 2009; Lima et al. 2011; Van der Lann et al. 2008).
Other studies from this literature strand have indicated an opposing view, postulating a positive relationship between CSR and financial performance (Blanco et al. 2013; Godfrey et al. 2009; Jiao 2010; Tang et al. 2012; Wang and Choi 2013; Hou 2019). Awaysheh et al. (2020) found that best-in-class CSR firms have higher operating performance and relative valuations than their lower-performing counterparts. Jia (2020) found a positive relationship between corporate performance and CSR performance in the Chinese market, but this only holds for companies prioritizing customer value and other stakeholders over shareholders.
In addition to investigating the financial impacts, other studies have analyzed the direction of the relationship between firms' CSR levels and financial performance to make a statement about the causalities. For example, early work by McGuire et al. (1988) assessed financial performance using stock market returns and accounting-based measures. Their findings showed that a company's prior financial performance is more closely related to CSR than its subsequent performance (McGuire et al. 1988). Similarly, Waddock and Graves (1997) and Scholtens (2008) discussed this reverse causality problem and provided evidence that CSR performance positively correlates with prior financial performance. This suggests that it is not CSR performance that drives financial performance but that companies with above-average financial performance use their surplus of monetary or non-monetary resources to improve their CSR performance further (Makni et al. 2009; Fischer and Sawczyn 2013). Aktas et al. (2011), however, criticized that the question of the direction of causation has not been sufficiently clarified in the literature.
A second strand of empirical literature takes an alternative methodological approach. These studies recognize that analyzing the causes and drivers of financial performance is a complex task, as numerous factors influence financials, and the level of CSR is only one of them. Consequently, these studies use event study methodology and focus on short-term financial performance, since event studies are appropriate tools to identify an event's financial implications and avoid the investigated relationships being blurred or overshadowed by other effects over a more extended period. These studies therefore turn to M&A deals to investigate the link between CSR and financial performance. Furthermore, as M&A deals can be described as somewhat unanticipated events, the event study methodology is also suitable to mitigate the reverse causality problem of the previous studies mentioned above (Deng et al. 2013; Cellier and Chollet 2016). Aktas et al. (2011) addressed this problem by analyzing the targets' CSR levels instead of the commonly used CSR levels of the acquirer.
The first study to employ an event study using announcement effects of acquirers in an M&A context to analyze the relationship between CSR and financial performance was conducted by Deng et al. (2013). In this context, the authors examined a large sample of mergers in the United States (US) between 1992 and 2007. The authors provided evidence that, compared to low-CSR acquirers, high-CSR acquirers realize higher stock returns following a merger announcement. Thus, their findings support the stakeholder value maximization view of stakeholder theory. A more recent study by Zhang et al. (2022) analyzed a broader international sample including 23 developed economies. They came to a similar conclusion when analyzing 1310 M&A transactions between 2002 and 2012. Their results showed that high-CSR acquirers generally achieve positive abnormal announcement returns. The returns are, however, negative when the acquisitions are hostile (Zhang et al. 2022).
Moreover, Krishnamurti et al. (2020), as well as Tampakoudis and Anagnostopoulou (2020), provided additional studies for the US and the European market, respectively, documenting a positive relationship between companies' CSR level and value creation for their shareholders. However, these studies did not use announcement effects but focused on long-run stock returns and post-acquisition Tobin's Q. Krishnamurti et al. (2020) suggested that the value creation is primarily attributable to the low bid premiums that socially responsible firms pay for their targets. In a substream of this literature strand, a minority of studies have used the target's CSR performance instead of the acquirer's, which they justified with the directional causality problem. Aktas et al. (2011) analyzed target CSR levels in a sample of 109 transactions from 1997 to 2007 and concluded that acquirer abnormal returns are positively associated with the targets' social and environmental performance. Similarly, Cho et al. (2021) found that higher CSR performance of a target firm creates value for M&A bidders. Finally, Teti et al. (2022) found, for a small sample of 73 M&A deals in 20 different countries, that the market positively values the acquisition of a company that scores high in ESG. However, the individual ESG factors have different relevance in explaining the market reaction. In summary, concerning target CSR performance, there appears to be a consensus in the limited literature that higher-CSR targets represent value-enhancing investment opportunities for buyers.
However, contrarian evidence was provided by Meckl and Theuerkorn (2015) for acquirer CSR performance. The authors found no correlation between CSR and announcement returns and suggested that "a business case for CSR regarding M&As cannot be made" (Meckl and Theuerkorn 2015, p. 224). Likewise, Yen and André (2019) found no significant relationship between CSR levels and deal announcement effects for a sample of 23 emerging markets. They concluded that M&As depend mainly on investors' cost-benefit concerns instead of CSR performance. A study by Fatemi et al. (2017) observed, for a Japanese sample, that the ESG performance of Japanese acquirers exerted no statistically significant influence on abnormal returns. Also, Li et al. (2019) found no effect of CSR on M&A announcement returns for a Chinese sample of 3500 firms. Furthermore, Tampakoudis et al. (2021) even found a significant negative value effect of ESG performance for the shareholders of 889 acquiring US firms. The authors argued that firms ignore the cost-benefit criterion and overinvest in CSR, suggesting that the market considers sustainability activities too costly, especially during economic downturns. They concluded that the market rewards low-CSR acquiring firms (Tampakoudis et al. 2021).
Regarding German-listed firms, empirical research is very limited. The few existing studies have followed the first literature stream that regressed CSR and accounting-based financial performance measures. Fischer and Sawczyn (2013) and Velte (2017) used German samples and found strong support for a significant positive interaction between CSR and financial performance. The findings of Fischer and Sawczyn (2013) pointed out that the positive link was also affected by the degree of companies' innovation. Similar to the results by Waddock and Graves (1997), the authors provided further support for a unidirectional, causal relationship between prior financial performance and CSR.
Moreover, Velte (2017) reported that CSR positively affected Return on Assets (ROA). However, there was no impact on Tobin's Q. When decomposing CSR into its underlying components, the governance performance dimension appeared to exert the most decisive influence on a firm's financial performance compared to the environmental and social performance dimensions (Velte 2017). Regarding the second strand of literature mentioned above in this field, no studies have yet analyzed M&A announcement effects to examine the relationship between CSR and value creation in Germany.
To conclude, an extensive body of theoretical frameworks and empirical studies present various notions and findings regarding the link between CSR and financial performance, but with ambiguous results. While some studies did not find positive relationships between CSR levels and financial performance, others have postulated positive associations, albeit with partially unclear causal directions. The meta-analyses (Orlitzky et al. 2003) and literature studies (Van Beurden and Gössling 2008) suggest a majority of positive relations between CSR and financial performance, but they do not include more recent research.
The contradictory findings in the literature regarding the impact of CSR on financial performance and, in particular, on M&A announcement effects appear to further reinforce the hypothesis stated above, suggesting that positive and negative effects could counterbalance each other. We speculate that these results ultimately mirror the conflicting perspectives of the stakeholder value maximization and the shareholder expense view.
Given the ongoing debate and the apparent absence of empirical studies focusing on the German market, this study sought to address the unresolved dilemma and complement the existing evidence with German data. To this end, we examined the relationship between CSR performance and M&A outcomes using a sample comprising M&A transactions in Germany spanning from 2017 to 2022. By conducting this investigation, our aim is to contribute to the existing literature by filling this significant gap in empirical research for the German market.
Data and Methodology
We chose DAX-listed firms as they account for around 80 percent of the total market capitalization of all listed companies in Germany, providing a representative view of the German market (Börse Frankfurt 2023). We measured the CSR engagement of a firm using ESG scores, aligning with prior studies that have evaluated a company's CSR engagement level through its ESG score (Deng et al. 2013; Velte 2017; Broadstock et al. 2020; Tampakoudis et al. 2021). ESG scores are considered objective assessments of a company's commitment to sustainable business practices and are common practice in CSR literature. The set of M&A transactions and companies' ESG scores were derived from the data provider FactSet. The FactSet database collects comprehensive information regarding each M&A transaction, including the announcement date, transaction value, and deal description. In addition, acquirer-related information, historical stock prices, and market index prices were retrieved. The ESG scores provided by FactSet are composed of five dimensions: environment, social capital, leadership and governance, human capital, and business model and innovation. The selection of control variables (industry, year, transaction value, listed target) followed the study conducted by Masulis et al. (2007). The target company status (public or private target) was included in the regression model because it appears to be a crucial driver of acquirer returns (Hazelkorn et al. 2004). In most cases, a target status of private was also responsible for a lack of ESG score data. As information on deal value was not disclosed for all transactions identified between 2017 and 2022, deals were classified into major deals (≥500 million euros) and minor deals (<500 million euros). This approach aligns with the study by Alexandridis et al. (2017), which provided evidence for more shareholder value creation among larger deals. The sample of transactions was selected following these criteria:

1. The initial sample included all M&A transactions sourced from FactSet in which German DAX-listed companies acted as the acquirers. The considered German firms have been part of the German index throughout the covered period from 1 January 2017 to 31 December 2022.

2. Acquiring firms within the banking, financial services, and insurance sectors were excluded. The company identification used by the Frankfurt Stock Exchange was adopted to identify the company's sector. Firms in these sectors were excluded because they are regulated by capital requirements and cash policies; they have considerably different capital structures with high liquidity and leverage, which makes them less comparable to the rest of the sample companies. The exclusion of financial intermediaries aligns with numerous previous studies (e.g., Deng et al. 2013; Mager and Meyer-Fackler 2017).

3. The M&A transactions must have been announced between 1 January 2017 and 31 December 2022, and their deal status must be completed.

4. The acquiring firm holds less than 50% of the target before the transaction is announced.

5. M&A transactions whose announcement dates and event windows interfered with each other were excluded due to a possible bias effect on the results.

6. Complete information regarding stock returns, market betas, and ESG scores must be available for all companies in the FactSet datasheet, creating a homogeneous data sample.

Occasionally, overlapping event windows were apparent in the dataset. Abnormal stock returns from overlapping event windows are correlated: the financial impact of one event spills over to a second event and multiplies the market response. Thus, these observations were dropped from the dataset to avoid distorting the study results.
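A minimal sketch of how these selection criteria could be applied programmatically. The record structure and field names (`announced`, `status`, `sector`, `prior_stake`) are purely illustrative assumptions, not FactSet's actual schema:

```python
from datetime import date

# Hypothetical deal records; field names are illustrative, not FactSet's schema.
deals = [
    {"announced": date(2018, 3, 5), "status": "completed",
     "sector": "Chemicals", "prior_stake": 0.10},
    {"announced": date(2019, 7, 1), "status": "completed",
     "sector": "Banking", "prior_stake": 0.05},      # excluded: financial sector
    {"announced": date(2021, 9, 9), "status": "pending",
     "sector": "Automotive", "prior_stake": 0.00},   # excluded: not completed
    {"announced": date(2022, 2, 2), "status": "completed",
     "sector": "Software", "prior_stake": 0.60},     # excluded: majority stake held
]

EXCLUDED_SECTORS = {"Banking", "Financial Services", "Insurance"}
START, END = date(2017, 1, 1), date(2022, 12, 31)

# Apply criteria 1-4: sample period, completed status, non-financial
# acquirer sector, and a pre-announcement stake below 50%.
sample = [d for d in deals
          if START <= d["announced"] <= END
          and d["status"] == "completed"
          and d["sector"] not in EXCLUDED_SECTORS
          and d["prior_stake"] < 0.50]
print(len(sample))  # → 1
```

Criteria 5 and 6 (non-overlapping event windows, complete return and ESG data) would require the full return and score panels and are omitted here.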
Following these selection criteria resulted in a final sample of 231 transactions by 17 German DAX-listed companies (see Table 2). For the regression analysis, the sample was further reduced to 56 M&A transactions, as the parties involved in a deal do not usually disclose the transaction value, limiting the sample for this particular analysis.
Essentially, two methods exist in the current literature to measure value creation from M&A: event studies and accounting studies. The event study is a forward-looking, direct measure of value creation based on market returns. In contrast, accounting studies are based on reported financial statements and measure returns through historical accounting figures. Thus, the disadvantage of this approach is the backward-looking nature of the variables used to compute stock returns. Additionally, previous studies on the relationship between CSR and firm value have suffered from a reverse causality problem (Waddock and Graves 1997; McWilliams and Siegel 2000). However, applying the event study methodology can potentially mitigate this, as M&A transactions are largely unanticipated events (Deng et al. 2013; Cellier and Chollet 2016; Teti et al. 2022).
Return event studies capture the stock market reaction and, thus, investors' perception of a specific event; in the context of this paper, this is the M&A announcement. An event's financial impact is quantified in abnormal returns (AR). The AR is the difference between the actual realized return and the return without the M&A announcement (Wang et al. 2020). The analysis employs the market model by Brown and Warner (1985) for calculating the return without an M&A announcement.
Rt = αi + βiRmt + εt (1)

Rt and Rmt denote the company-specific stock return and the market return, respectively. αi and βi are the two parameters determining the linear relationship between the company-specific return and the overall market return. The residual εt is the error term; in the market model, its expected value is presumed to be 0. Therefore, the expected return of the stock for the event date is:

E(Rt) = αi + βiRmt (2)

The CAPM was applied as a prediction model for the market model by Brown and Warner (1985) in this paper:

E(Rt) = Rf + βi(Rm − Rf) (3)

Rf indicates the risk-free rate, while βi and Rm represent the company's beta and the market return, respectively. Hence, the AR for each event date is:

ARt = Rt − E(Rt) (4)

To quantify the impact of an event over a time period, daily abnormal stock returns were cumulated to obtain the cumulative abnormal return (CAR). Thus, CAR is the sum of AR over the event window, stretching from the day before the merger announcement t(−1) to the day after the merger announcement t(+1):

CAR[−1, +1] = Σ (t = −1 to +1) ARt (5)

More specifically, this paper computed the abnormal stock returns in three steps. First, the event date and the event window were defined. The exact event date for each transaction was the announcement date of the M&A deal; t0 denotes the event date. If the announcement occurred on a non-trading day, the following trading day was considered the event date. In line with Zhang et al. (2022), Hackbarth and Morellec (2008), and Alexandridis et al. (2017), an event window of 3 days [−1, +1] was incorporated in this study to examine the short-term value creation following the M&A announcement. Like Deng et al. (2013), a longer event window of 11 days [−5, +5] was also applied to further test the robustness of the results and for comparability reasons. This event window included several days before the announcement due to the possible occurrence of information leakage and rumors. Furthermore, as the market reaction following the announcement might last several days, the days after the event date were also included to capture the short-term value creation (Aggarwal and Chen 1985). We used short-term event windows to accurately reflect the shareholder value creation from M&A deals in the short run (Andrade et al. 2001). Longer event windows that include more days are not optimal because they may increase the likelihood of noise and influence from other factors (MacKinlay 1997). Additionally, the error term in short-term studies is smaller, and the corresponding computation of abnormal stock returns is more accurate than in long-term studies (Kothari and Warner 2007).
The second step was to compute each transaction's expected return, AR, and CAR. The normal return was estimated using the parameters of the CAPM. The risk-free rate was approximated using the yields of 10-year German government bonds. The yearly average values of their betas were employed to assess the systematic risk associated with individual companies. The DAX index's daily return served as the market proxy. We used an estimation window of 200 days for estimating the expected returns.
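The second computation step can be sketched as follows: derive the CAPM-expected return for each event day, take the difference to the realized return to obtain the daily AR, and sum over the window to obtain the CAR. The returns and parameters below are fabricated for illustration only, not data from the sample:

```python
def capm_expected_return(rf, beta, r_m):
    """Expected daily return under the CAPM: E(R) = Rf + beta * (Rm - Rf)."""
    return rf + beta * (r_m - rf)

def car(stock_returns, market_returns, rf, beta, window):
    """Cumulative abnormal return over an inclusive event window.

    stock_returns / market_returns map the relative event day
    (0 = announcement date) to that day's return; window is an
    inclusive (start, end) pair, e.g. (-1, 1) for the 3-day window.
    """
    total = 0.0
    for t in range(window[0], window[1] + 1):
        expected = capm_expected_return(rf, beta, market_returns[t])
        total += stock_returns[t] - expected  # daily abnormal return AR_t
    return total

# Fabricated daily returns around a hypothetical announcement day.
stock = {-1: 0.004, 0: -0.012, 1: 0.002}
market = {-1: 0.003, 0: 0.001, 1: -0.002}
print(round(car(stock, market, rf=0.0001, beta=1.1, window=(-1, 1)), 6))  # → -0.00817
```

In the study itself, rf is approximated by 10-year Bund yields, beta by the yearly average company beta, and the market return by the daily DAX return.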
In the last step, several statistical tests and regression analyses were performed. To investigate the value effects of CSR, we analyzed the market's reactions to the ESG scores. We used the following regression equation:

CARi,t = β0 + β1 ESGi,t + β2 INDi + β3 YEARi + β4 TVi + β5 LTi + εi,t (6)

where CARi,t is the cumulative abnormal return of acquirer i on date t, ESGi,t is the ESG score for acquirer i, IND is the control variable Industry, YEAR is the control variable Year of the deal, TV is the control variable Transaction value for potential deal size effects, LT is the control variable Listed target (public or private), and ε is the random disturbance term.
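As a simplified illustration of the estimation, the sketch below fits a single-regressor OLS of CAR on the ESG score via the closed-form slope formula; the paper's full model additionally includes the industry, year, transaction-value, and listed-target controls as further regressors. All numbers are fabricated:

```python
from statistics import mean

def ols_slope_intercept(x, y):
    """Closed-form OLS for a single regressor: y = a + b*x + e."""
    xbar, ybar = mean(x), mean(y)
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return a, b

# Fabricated toy data: acquirer ESG scores and 3-day CARs (in percent).
esg = [42.0, 55.0, 61.0, 48.0, 70.0]
car_pct = [-0.3, 0.1, -0.2, 0.4, -0.1]
a, b = ols_slope_intercept(esg, car_pct)
print(round(b, 4))  # → -0.0033
```

In practice, a multiple regression with dummy-coded Industry and Year variables would be estimated with a statistics package rather than by hand; the closed form above is only meant to show the mechanics.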
t-tests were conducted to test whether the mean CAR for the two event windows of 3 and 11 days significantly differed from 0. In addition, the Wilcoxon test was used to examine the significance of the median CAR.
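The one-sample t-test on the mean CAR reduces to a short computation; the resulting statistic is then compared against the t distribution with n − 1 degrees of freedom. The CAR values below are fabricated for illustration:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(sample, mu0=0.0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / sqrt(n))

# Fabricated 3-day CARs (in percent) for illustration only.
cars = [-0.5, 0.2, -0.1, -0.4, 0.3, -0.2]
print(round(one_sample_t(cars), 3))  # → -0.896
```

The Wilcoxon signed-rank test on the median CAR follows the same idea but ranks the absolute deviations from 0 instead of using the sample mean and standard deviation.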
Before the regression analysis, several necessary assumptions were tested to ensure the model's validity. First, the regression model was tested for multicollinearity; we found no evidence of it, as the correlation among the independent variables was within the acceptable threshold of less than 0.7. Second, the sample was inspected for possible outliers by performing a Cook's Distance test. Computing Cook's Distance resulted in minimum and maximum values of 0.000 and 0.394, respectively. In addition, the Cook's Distance values were visually inspected via a scatter plot, in which no data point appeared to be an extreme outlier. Subsequently, the Kolmogorov-Smirnov and Shapiro-Wilk tests were performed to analyze the residuals' normal distribution. Both tests assume a normal distribution in their null hypothesis, which can be retained with p-values larger than 0.05. Both tests were insignificant and, thus, could not reject the null hypothesis.
Furthermore, a visual examination was undertaken. The data points followed approximately along the diagonal in the Q-Q plot. Similarly, the histogram indicated an approximately normal distribution. Based on the analytical and graphical indications, the residuals are assumed to be normally distributed. Afterward, the regression model was tested for signs of heteroskedasticity. Based on a visual inspection of the scatter plot of residuals, the values were within −3 and 3; in addition, the data points were randomly distributed and did not appear in a specific pattern. The modified Breusch-Pagan test further investigated the potential existence of heteroskedasticity. The test was insignificant and provided no evidence for heteroskedasticity. Finally, the Durbin-Watson statistic was computed to test for possible first-order autocorrelation of the residuals. The output showed a value of 2.481, within the acceptable range between 1.5 and 2.5. Therefore, no autocorrelation was assumed to exist among the residuals. Consequently, based on these different test procedures, we assumed that a linear multiple regression could provide valid results.
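The Durbin-Watson statistic used in the autocorrelation check follows directly from the residual series: it compares successive residual differences with the residuals' overall magnitude, with values near 2 indicating no first-order autocorrelation. The residuals below are fabricated for illustration:

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic: sum of squared successive differences
    of the residuals divided by their sum of squares."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Fabricated regression residuals for illustration only.
resid = [0.5, -0.2, 0.3, -0.1, 0.2, -0.3]
print(round(durbin_watson(resid), 3))  # → 2.385
```

Values well below 2 would suggest positive autocorrelation and values well above 2 negative autocorrelation, which is why the study treats 1.5 to 2.5 as the acceptable range.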
Results and Discussion
Table 3 shows the mean and median CAR values for this study's two applied event windows.
Table 3. Means and medians of CAR for the selected event windows over different observation periods (in percent). Note: ** significant at the 1% level, * significant at the 5% level.
Both event windows yielded a negative mean of −0.13% for CAR [−1, +1] and −0.25% for CAR [−5, +5] over the complete observation period. However, the results were not statistically significant. At the very least, it could be assumed that the stock market did not react positively in the short run to M&A transactions of DAX-listed acquirers in the sample. This is in line with the majority of existing studies on short-term value creation from developed economies that document mainly negative announcement effects (e.g., Healy et al. 1992; Andrade et al. 2001; Campa and Hernando 2004; Hazelkorn et al. 2004; Conn et al. 2005; Hackbarth and Morellec 2008; Alexandridis et al. 2012; for the German market: Mager and Meyer-Fackler 2017). Only recently, however, have isolated studies come to more positive results, such as Alexandridis et al. (2017), which found evidence of positive value effects for acquiring firm shareholders in the aftermath of the 2008 financial crisis.
In a similar vein, when examining the mean CAR for each year throughout the designated period, we observed positive (but not significant) value creation effects of 0.07% for CAR [−1, +1] and 0.55% for CAR [−5, +5] in the year 2021 following the disruptive impact of the COVID-19 pandemic. These effects were observed across both event windows, partially supporting the earlier conclusions by Alexandridis et al. (2017). Their research proposed a rapid recuperation in the value creation effects of acquiring firm shareholders following a period of economic turmoil. In the other years, we saw predominantly negative CAR values without statistical significance. To conclude, there was little evidence of positive effects on the announcements of M&A transactions. A similar pattern emerged when examining a pre-COVID-19 (2017-2019) and an interim-COVID-19 (2020-2022) subsample regarding CAR values. Although not statistically significant, there were indications that the negative effect persisted; at the least, there were scarcely any indications that positive effects had emerged. There were no differences between the two subsamples (see Table 3). Additionally, no significant outliers existed in any of the data periods considered.
In the next step, we used multiple linear regression (see Table 4) to investigate the relationship between the announcement effect and the CSR performance of the firms in our sample, as well as several control variables (industry, year, transaction value, and listed target). A detailed collinearity diagnosis was performed to closely examine potential adverse effects on the regression results. However, the dataset did not reveal signs of multicollinearity. Furthermore, the regression model was tested for outliers, heteroskedasticity, and possible first-order autocorrelation of the residuals. We found no limitations in performing the regression.
As the indicator variables Industry and Year have ten (number of industries in the sample) and six (number of years in the sample) levels, respectively, the Chemicals industry and the Year 2017 are set as the reference categories in the regression analysis. To determine an overall significant relationship between the dependent variable and the set of explanatory variables, an F-test was conducted, yielding a value of 0.781. The ANOVA output disclosed a significance value of 0.697 and was not statistically significant. Therefore, the selected independent variables, including the ESG score and several control variables, do not significantly impact abnormal stock returns following an M&A announcement. The adjusted coefficient of determination (R²) of negative 0.068 states that the model cannot explain the variance of the dependent variable. A possible explanation for the model's poor fit might be that the selected explanatory variables do not appropriately reflect changes in abnormal stock returns. As a result, ESG scores, as a proxy of firms' level of CSR, are not statistically relevant to the creation of short-term shareholder wealth. The ESG scores used in our first regression were aggregates of five individual components provided by FactSet: (1) Business Model and Innovation, (2) Environment, (3) Leadership and Governance, (4) Human Capital, and (5) Social Capital. In a further regression, we replaced this aggregated "meta-ESG score" with the five individual scores for each company. The results can be found in Table 5, where it can be seen that the individual components also showed no significant correlations. This confirms the results from the first regression model. Thus, concerning our research question, we did not find any evidence of a significant positive or negative relationship in our sample of German M&A transactions between the level of CSR, measured by the
corresponding ESG score, and value creation. This result confirms our hypothesis that an acquiring company's level of CSR neither positively nor negatively influences investor perception, as measured by abnormal stock returns in the context of M&A announcements.
Our research adds new evidence to the still ambiguous results in the existing literature and contributes to the discussion on the potential influence of CSR in M&A transactions. Our work is the first paper that analyzes the relationship between CSR performance and value creation for the German market using transaction data from M&A deals. In essence, the findings of this paper contradict recent study results arguing for a positive correlation between the acquirer's CSR level and abnormal stock returns after the M&A announcement. The works by Deng et al. (2013) and Zhang et al. (2022) found positive value effects for the US and for an international sample of developed countries. However, our sample did not provide evidence for such a correlation regarding German-listed firms between 2017 and 2022.
While our findings contradict the abovementioned studies, they do not stand alone. They align with early results of Arlow and Gannon (1982) and Aupperle et al. (1985), who did not find a statistically significant relationship between CSR and financial performance. More recently, and in alignment with our hypothesis, Meckl and Theuerkorn (2015) also found no statistical correlation between the CSR dimension and abnormal returns. Moreover, our results are consistent with the study conducted by Fatemi et al. (2017) for a Japanese sample. The authors observed that the ESG performance of Japanese acquirers exerts no statistically significant influence on abnormal returns. They explained this finding by arguing that Japan's market for corporate control has become more competitive and, consequently, now behaves similarly to those of Western countries. Furthermore, our results are consistent with a recent study for the Chinese market (Li et al. 2019) and a study using a sample of 23 emerging markets (Yen and André 2019). Likewise, Tampakoudis et al. (2021) investigated the relationship between ESG performance and abnormal announcement returns for a similar recent period. However, they used a slightly different research design, primarily focusing on differences in shareholder wealth creation before and during the COVID-19 pandemic, and their sample included only US firms. The authors did not find a significant positive value effect of ESG performance on acquiring shareholders independently of the COVID-19 pandemic (Tampakoudis et al. 2021). Our findings confirm the results of these studies.
Thus, based on the findings of our study for the German market and the results reported in other regions, the positive results presented by Deng et al. (2013) and Zhang et al. (2022) may ultimately prove to be isolated statistical artifacts. Despite the smaller dataset and the focus on the German M&A market, our results cast doubt on the generalizability of the findings in the works of Deng et al. (2013) and Zhang et al. (2022). Possible differences could arise from the varying study periods, which at least call into question the time-independence of the results. Additionally, Zhang et al. (2022) did not provide separate results for the regions included in their sample, leaving room for speculation about contradictory or insignificant outcomes within their international sample. Differences in the industrial composition of the markets considered, or generally differing M&A markets with regard to investor culture, could also explain the discrepancies in the results.
In conclusion, we found no evidence that would either support positive effects from the stakeholder value maximization view or negative effects from the shareholder expense view. The stakeholder value maximization view, on the one side, argues that CSR performance may positively impact M&A through benevolent stakeholders supporting a deal and through intensified analyses of cultural fit and other risk factors such as ecological or social risks. In an M&A setting, in high-CSR firms, this positive effect should be visible in better financial performance and positive investor perception, since benevolent stakeholders may often influence decisions in such transactions and play an important role in post-merger integration (Deng et al. 2013). However, we did not find evidence that confirms this view. On the other side, we also did not find evidence that would support the negative effects suggested by the shareholder expense view. This view suggests that engaging in socially responsible activities can incur additional costs, potentially putting a company at a competitive disadvantage. Companies that heavily invest in CSR initiatives might face financial constraints, which could hinder their ability to effectively integrate acquisition targets. Furthermore, investors might anticipate higher integration costs due to the need for alignment and standardization of CSR practices. Lastly, prioritizing CSR efforts could divert management's attention from the rigorous execution required in M&A transactions. In summary, these negative effects should result in negative investor perceptions.
While we believe that the arguments from both contrasting views hold logical validity, we conclude that both effects exist but balance each other, thus confirming our hypothesis. As demonstrated in Figure 1, there are positive effects from CSR activities and negative effects from CSR activities that should be considered by investors. We believe that our results provide evidence that, in the perception of investors, these effects in aggregate cancel each other out, thus leading to insignificant results in our sample. Consequently, the empirical evidence suggests that the market is undecided about CSR's value-enhancing effects in M&A surroundings.
We acknowledge that alternative explanations could account for our results, as well as for the differences observed when compared to the findings of Deng et al. (2013) and Zhang et al. (2022). Different methodological approaches could explain the contradictory empirical results in some of the existing literature. McGuire et al. (1988) pointed out that social responsibility affects firm performance in several ways. Hence, selecting explanatory variables and research design might substantially influence the findings. For instance, a minority of studies have investigated the CSR performance of the target (Aktas et al. 2011; Teti et al. 2022) instead of the acquirer, or the difference between the target's and the buyer's CSR levels (Cho et al. 2021). Additionally, the difference in the definition of CSR and ESG might be a reason for the difference in the results of various studies. Furthermore, while we believe that the arguments provided for and against a value effect of CSR in M&A hold true, practical experience in M&A may suggest different results, overshadowing the effects postulated in Section 2 and summarized in Figure 1. While companies with strong CSR performance may exhibit better relations with stakeholders, as the stakeholder value maximization view suggests, in M&A transactions other factors such as financial performance, previous integration experience, or shareholders' perceptions of the takeover price may play a far more significant role. In addition, CSR might not serve as a sufficient measure of cultural fit between the acquiring and target companies. The acquirer's focus on CSR performance might not necessarily translate into a greater focus on cultural synergies with the target firm, which one might expect to facilitate smoother integration. Likewise, detecting other risk factors, such as ecological or human resource risks, may be more influenced by the professional execution of the due diligence process than by a focus on CSR
performance. In sum, these tempering effects could explain the absence of significant positive results.
Conclusions
This study explored the potential impact of an acquirer's level of CSR (measured through its ESG score) on cumulative abnormal stock returns following an M&A announcement. The sample included 231 M&A transactions announced by DAX-listed companies between 1 January 2017 and 31 December 2022. While most studies in this research field apply accounting-based, long-term profitability measures, this paper complements the literature by focusing on short-term value creation and measuring abnormal stock returns with the event study methodology. Consequently, we assume an investor-oriented perspective in the study. It is the first study that analyzes the relationship between CSR performance and value creation in M&A transactions for the German market after the implementation of the NFRD in the EU.
Our results contribute to the ongoing debate regarding the crucial and inconclusive question of whether a higher level of CSR is value-enhancing or value-destroying. As the literature remains ambiguous, with several studies finding positive announcement effects for high-CSR acquirers (Aktas et al. 2011; Deng et al. 2013; Cho et al. 2021; Zhang et al. 2022) and other studies documenting opposing results (Meckl and Theuerkorn 2015; Fatemi et al. 2017; Yen and André 2019; Tampakoudis et al. 2021), we offer a new, balanced perspective on the issue. With our study, we provide an extensive discussion and a comprehensive summary of the potential positive and negative effects of an acquirer's CSR engagement on value creation in M&A situations. Contrasting, in particular, the findings of Deng et al. (2013) and Zhang et al. (2022), our findings cast considerable doubt on the existence of a positive or negative correlation between CSR and value creation in M&A. Adopting a multiple linear regression model, the results could not provide statistically relevant evidence that a company's level of CSR positively or negatively affects abnormal stock returns following an M&A announcement. Since both the positive and negative arguments regarding the impact of CSR on M&A transactions hold validity, our results suggest that they potentially offset each other. This balance results in a net effect where neither a distinctly positive nor negative influence of CSR activities on investor perception in the context of M&A is observed. This ambiguity highlights the complex nature of CSR's role in M&A transactions. On the one hand, CSR initiatives may foster better stakeholder relations, and support from stakeholders such as employees, customers, and suppliers can be a strong positive signal in M&A situations (Deng et al. 2013; Cho et al. 2021; Segal et al.
2021). On the other hand, CSR initiatives involve high costs that could hinder integration efforts, trigger higher costs due to the need for alignment and standardization of CSR practices, and divert management's attention from the rigorous execution required in M&A transactions (Tampakoudis et al. 2021; Meckl and Theuerkorn 2015). In conclusion, the market appears to be overall undecided about the value-enhancing effects of CSR performance on M&A.
Our findings have important implications. First and foremost, the results of this study suggest that it is not rewarding for equity investors to pay attention to whether increased CSR investments have been made in potentially M&A-active German firms. CSR performance does not appear to be an effective screening criterion for identifying value-creating takeovers in Germany. Instead, our results suggest that CSR investments by acquirers imply both positive and negative effects that balance each other out in an M&A context. For fund managers, CSR investments are likewise not a positive trading signal that would promise excess returns, especially not in the run-up to an anticipated acquisition. Second, corporate decision-makers should not view CSR-related measures as value-adding investments. Finally, corporate decision-makers should instead consider broader factors beyond CSR initiatives to create shareholder value (Yen and André 2019). They should evaluate the strategic rationale for M&A transactions and consider factors such as the strategic fit of the target company with the acquirer's business, the impact of the transaction on the company's overall financial performance, and the proper integration of the target.
Nevertheless, despite the presented findings of this study, CSR activities continue to be a central part of corporations' visions and strategies (Cho et al. 2021). As stakeholders globally demand increasing and more detailed disclosure of information regarding a company's social activities, awareness and sensibility for CSR aspects continue to become a more critical and established tool for companies' strategic development. While CSR does not appear to pay off for shareholders, corporate leaders must address ongoing challenges such as climate change and increasing pressure from stakeholders to adopt more sustainable business practices. Therefore, German-listed firms specifically, and firms on an international scale, should continue incorporating social, sustainable, and ethical standards into their investment decisions. Although CSR activities do not necessarily create immediate value for shareholders in the short run, they have become indispensable to sustainable development, as demanded by policymakers and other stakeholders alike, and could result in value creation in the long run.
Our study may be subject to several limitations, which could pave the way for future research. Initially, 231 corporate transactions were identified that fit the research design. However, many observations did not disclose data on the deal size, resulting in an unbalanced sample and limiting the sample size to 56 M&A transactions in the multiple linear regression model. Although it is common for the parties involved in a deal not to disclose the transaction value, this practice nonetheless resulted in a relatively small sample size. Thus, it is essential to understand that the research findings must be considered carefully, as they may lack generalizability and validity. This may call for future studies with larger sample sizes. In addition, the covered period (2017-2022) offers limited insights, as further regulatory changes in the German and European contexts only become apparent with consistent longitudinal data collection. Moreover, analyzing the impact of CSR through ESG scores adds another limitation. The assessment might lack some accuracy due to the slight difference in the definitions of CSR and ESG (Barros et al. 2022; Damtoft et al. 2024). However, using ESG scores as a proxy for CSR engagement is common practice in the related literature (e.g., Deng et al. 2013; Velte 2017; Broadstock et al. 2020; Tampakoudis et al. 2021; Barros et al. 2022) and allows for comparisons with former studies. Teti et al. (2022) added that ESG scores are composed of different factors, and disentangling these might lead to differentiated results. Furthermore, other previously applied measures of CSR, such as social audits or CEO surveys, have similar shortcomings. Thus, future research could address this issue by applying alternative measures of a firm's CSR level, using the difference between an acquirer's and a target's CSR performance (Cho et al. 2021), or analyzing the proxies' validity in general.
An additional limitation could arise from the low cross-company variation in ESG scores. The non-significant results in the regression model could be explained by the relatively low variation between companies in terms of their ESG scores, which makes differentiation difficult and negatively affects possible correlations in terms of significance. Additionally, our sample only contained transactions from German DAX-listed companies. While the German DAX index represents about 80% of the market capitalization of German stock-listed firms, it is still a limited focus. Furthermore, studies focusing on a single region may be subject to distinctive circumstances regarding firms' CSR awareness and the regulatory landscape. Therefore, it would be interesting to investigate whether one can draw general conclusions for multiple markets or if research findings are only valid for the chosen region. Finally, we cannot control our results for the targets' ESG scores due to a lack of data on privately held target companies, which represent most of the targets considered. Future research could add more empirical evidence by additionally considering targets' CSR performance and investigating whether differences or similarities between ESG scores of the acquirer and the target influence announcement effects (Aktas et al. 2011; Cho et al. 2021; Tampakoudis and Anagnostopoulou 2020).
Against the background of these limitations, and with the literature remaining ambiguous, we suggest further research on the relationship between a company's level of CSR and both short-term and long-term value creation, as well as on the motivation for firms to engage in CSR activities. In this context, collecting more longitudinal data and formulating universally valid statements is essential for researchers and managers. Also, a new study employing a similar research design, conducted after implementation of the new EU Corporate Sustainability Reporting Directive (CSRD) from 2024 onward, could shed light on the effects of enhanced CSR performance in EU companies on value creation. Finally, the relationship between M&A and CSR could be further explored by reversing the causal direction and analyzing the impact that M&A has on CSR (Barros et al. 2022).
Figure 1. Potential impacts of CSR on acquirer's stock performance in the case of an M&A announcement in the short run.

Table 1 summarizes the individual steps of the sample selection process.

Table 1. Sample selection process.

Table 2. Descriptives about the acquiring firms.

Table 4. Model summary of regression analysis (meta ESG score).

Table 5. Model summary of regression analysis (five ESG dimensions).
Analytical evaluation of a BNP assay on the new point-of-care platform respons®IQ
a) Objectives: respons®IQ is a new point-of-care (POC) immunoassay platform utilizing evanescent field total internal reflection fluorescence (TIRF) detection and active microfluidics controlled by optical sensors. A B-type natriuretic peptide (BNP) assay was developed on this system. The objective was to show that the BNP test fulfils the basic requirements regarding analytical performance, storage stability of cartridges and correlation to reference systems to be used as a POC test. b) Design and methods: Analytical sensitivity and imprecision were determined in 10 separate experiments over a period of one year. Cartridge storage stability at 4–7 °C and 37 °C was tested. The correlation of responsIQ whole blood measurements to a POC reference device and a laboratory analyzer was determined using 100 patient samples. c) Results: Limit of detection (LOD) was 2.3±1.0 pg/mL BNP and within-run coefficient of variation (within-run CV) was 4.8±1.4% down to a concentration of <40 pg/mL BNP. Cartridge storage stability at 4–7 °C was greater than 50 weeks and at 37 °C, stability was three weeks. The correlation of responsIQ results with both reference methods was high (r≥0.972). d) Conclusions: The developed BNP test fulfils the basic requirements for the performance parameters defined above. The test's sensitivity was in the performance range of laboratory analyzer BNP tests. This is the first extensive proof of concept of the responsIQ system.
Introduction
Over the last two decades, many research projects have sought to generate new analytical sensor devices applying microfluidic sample processing. Despite the diversity of the newly developed test systems only a very small number of devices have made the way to the market in the field of point-of-care (POC) immunoassay systems [1]. For applications in which quantification and high sensitivity are less important, the lateral flow test is still the tool of choice due to its simplicity and low production cost [1]. On the other hand, devices for applications with high demands regarding sensitivity and accurate quantification cannot yet match the performance of laboratory analyzers.
pes diagnosesysteme has developed a microfluidic device for POC diagnostics with active fluidics and total internal reflection fluorescence (TIRF) detection. The device consists of a single-use cartridge, which contains all biomaterials, and an instrument, which pneumatically moves the liquid, controls the microfluidic assay steps with the aid of optical sensors and reads the TIRF assay. The responsIQ is to our knowledge the only POC system that controls the fluidic action by optical sensors, which detect the position of the sample within the cartridge. These optical sensors also control the sample volume, which is loaded on the cartridge by means of a 50 µL positive displacement pipette.
As proof of concept, a BNP test has been installed on the system. BNP is a clinical marker for the diagnosis of heart failure and for risk assessment in cases of acute coronary syndrome. As BNP is present in human plasma only in low pg/mL up to low ng/mL levels, a highly sensitive test is required. The use of a typical cut-off concentration in clinical practice of 100 pg/mL for patients presenting with acute onset or worsening of symptoms or alternatively of 35 pg/mL for patients with non-acute presentations [2] requires a quantitative assay with high precision at the decision cut-offs. The objective of this study was to show that the BNP test fulfils the basic requirements regarding analytical performance, storage stability of its cartridges and correlation to reference systems to be used as POC test. Therefore, determination of detection limit and imprecision, a study on cartridge storage stability and a comparison to reference systems using patient samples were performed.
Materials
A pair of commercially available sandwich murine anti-BNP monoclonal antibodies (mAb) was used for BNP detection. The anti-BNP clone 50E1 from Hytest Ltd (Turku, Finland) was used as capture antibody. The anti-BNP clone 24C5, also from Hytest Ltd, was used as detection antibody.
Gilson Microman pipettes with a fixed volume of 50 µL (custom-made adaptation) and tips were made by Gilson S.A.S. (Villiers-le-Bel, France).
RFID tags MiniTrack Paper Tag 3002077 were ordered from Smarttrac Technology Group (Frankfurt am Main, Germany). All other chemicals were from Sigma-Aldrich Chemie GmbH (Munich, Germany).
Samples
EDTA-anticoagulated whole blood from patients was received from a local cardiologist. These specimens were left over from routine checkups of the patients and all samples were anonymised.
EDTA plasma was generated by centrifugation of whole blood samples at 2830 g for 10 min and collection of the supernatant.
responsIQ system
responsIQ is a new POC immunoassay platform and comprises a readout instrument and ready-to-use cartridges (Fig. 1A). The platform requires 50 µL of sample (either whole blood or plasma). Measurement is performed in less than 10 min. Several safety features, such as integrated control of sample volume and flow as well as storage of calibration and lot-specific data on each cartridge, are applied.
responsIQ measures the rate of increase of the fluorescence signal, which is proportional to the concentration of analyte in solution. The resulting signal slope in V/s is read off the calibration curve by the analysis software and the BNP result is displayed on the instrument's screen.
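The slope-based readout described above can be sketched as a straight-line fit over the fluorescence trace. Only the principle (rate of signal increase in V/s, read off a calibration curve) comes from the text; the time points and voltage values below are invented for illustration:

```python
import numpy as np

# The readout quantifies the rate of fluorescence increase: fit a straight
# line to the fluorescence-vs-time trace and take its slope in V/s.
# Time points and voltages are hypothetical.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])              # seconds
fluorescence = np.array([0.10, 0.14, 0.18, 0.22, 0.26, 0.30])  # volts

slope_v_per_s, offset = np.polyfit(t, fluorescence, 1)
# slope_v_per_s would then be read off the stored calibration curve
# to yield the BNP concentration.
```
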
Further background information on the responsIQ has been published in a patent application [3]. The measurement principle of the responsIQ has already been described by Rascher et al. [4], who have developed a procalcitonin assay on the system (Video A1).
Cartridge design and production
The single-use cartridge contains a micro-fluidic system with an optical area for detection (Fig. 1B). The integrated assay reagents consist of a BNP sandwich assay. Within the measuring cell, three lines of BNP capture antibody are immobilized.
The anti-BNP detection antibody was labeled in-house with the activated cyanine dye S 0458 (λex,max = 647 nm / λem,max = 664 nm, 2–3.5 mol dye per mol antibody) to form the BNP detection conjugate (detection antibody). The detection solution contains the detection antibody as well as buffer components and blocking components. The detection solution is dispensed into the detection antibody zone (Fig. 1B).
All assay reagents are dried into the channels for stability reasons. An integrated RFID-tag stores test-and lot-specific data. The cartridge is packaged in Alu-PE pouch with a silica gel pack.
Calibration of responsIQ cartridges
A low endogenous EDTA plasma pool was generated by selecting and mixing samples with BNP concentration less than 15 pg/mL. The BNP concentration of the pool was measured on a Siemens ADVIA Centaur® (Siemens Healthcare Diagnostics, Erlangen, Germany) and the pool was stored at −80 °C in aliquots.
Glycosylated proBNP, which is the prohormone of BNP and comprises the BNP peptide chain, was used for calibration, as it is considerably more stable towards degradation in plasma matrices than BNP [5]. Also, glycosylated proBNP has been shown to be the major component of immunodiagnostically detected BNP in patients with heart failure [6]. The manufacturer of the glycosylated proBNP (Hytest Ltd) determines the antigen mass of glycosylated proBNP with respect to the peptide content of the molecule (M = 11905.5 g/mol). The BNP molecule has a lower molecular weight of 3464 g/mol. The concentration of a calibrator (in pg/mL) is labeled with respect to the BNP fraction of the calibrator, termed 'BNP equivalent' in this work, and translates to a 3.44 times higher mass of contained glycosylated proBNP.
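As a quick arithmetic check of the stated 3.44× factor, the conversion from a labeled 'BNP equivalent' concentration to the contained glycosylated proBNP mass can be written out. The molecular weights are those given in the text; the helper function is illustrative:

```python
# A calibrator labeled in BNP equivalents (pg/mL) contains a
# 11905.5 / 3464 ≈ 3.44 times higher mass of glycosylated proBNP.
M_PROBNP = 11905.5  # g/mol, glycosylated proBNP (peptide content)
M_BNP = 3464.0      # g/mol, BNP

factor = M_PROBNP / M_BNP  # ≈ 3.44, as stated in the text

def probnp_mass(bnp_equivalent_pg_ml):
    """Mass of glycosylated proBNP (pg/mL) contained in a calibrator
    with the given BNP-equivalent label."""
    return bnp_equivalent_pg_ml * factor
```
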
For calibration of a new cartridge lot, the following concentrations were spiked into a low endogenous EDTA plasma pool and each calibrator was measured 3 times (see also Section 3.1): for the low concentration calibration curve, 0, 70, 150, 270 and 400 pg/mL BNP equivalents; and for the high concentration calibration curve, 700, 1300, 1900 and 2500 pg/mL BNP equivalents.
The calibration curves were generated by least squares fit. The intersection of both curves was calculated and used as the switch point for the domain of calibration (Fig. 2). The calibration data was written to the RFID chips of the remaining cartridges of the production lot. Afterwards the cartridges were packaged into Alu-PE pouches containing dry packs.
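The split-calibration scheme (one least-squares line per domain, with the intersection used as the switch point) can be sketched as follows. The calibrator concentrations are those listed above; the signal values and the resulting switch point are invented for illustration:

```python
import numpy as np

# Two least-squares fits (signal = a*conc + b), one per concentration domain.
# Signal slopes (V/s) below are hypothetical.
low_conc = np.array([0.0, 70.0, 150.0, 270.0, 400.0])    # pg/mL BNP equivalents
low_sig = np.array([0.001, 0.015, 0.031, 0.055, 0.081])
high_conc = np.array([700.0, 1300.0, 1900.0, 2500.0])
high_sig = np.array([0.155, 0.305, 0.455, 0.605])

a_lo, b_lo = np.polyfit(low_conc, low_sig, 1)   # low-concentration domain
a_hi, b_hi = np.polyfit(high_conc, high_sig, 1) # high-concentration domain

# Concentration at which the two regression lines intersect
switch = (b_hi - b_lo) / (a_lo - a_hi)

def concentration(signal):
    """Read a measured signal slope off the split calibration curve."""
    conc = (signal - b_lo) / a_lo
    if conc > switch:
        conc = (signal - b_hi) / a_hi
    return conc
```

With these made-up numbers the lines intersect at 420 pg/mL, just above the highest low-range calibrator, so each measured signal is converted using the regression line of the domain it falls into.
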
Measurement
50 µL of sample (either EDTA whole blood or EDTA plasma) was loaded on the sample port of the cartridge by a Gilson Microman pipette. The cartridge lid was closed and the cartridge inserted into the responsIQ instrument. The cartridge was automatically processed by the instrument and the used cartridge ejected automatically after measurement. The BNP result was displayed on the instrument's screen and could be printed on the integrated printer.
Calculations and statistics
The limit of detection (LOD) was calculated by dividing two standard deviations of the low endogenous plasma pool measurements by the slope of the dose-response curve. The statistical significance of differences between two groups was evaluated using Student's unpaired t-test. Values of p < 0.05 were considered significant.
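The LOD rule described here (two standard deviations of the low-pool readings divided by the dose-response slope) can be illustrated with a short sketch; the pool readings and the slope value are hypothetical:

```python
import statistics

# LOD = 2 * SD(low-pool signal readings) / dose-response slope.
# Signal slopes (V/s) and calibration slope below are invented.
pool_readings = [0.0012, 0.0015, 0.0010, 0.0013, 0.0011,
                 0.0014, 0.0012, 0.0016, 0.0011, 0.0013]
dose_response_slope = 0.0002  # V/s per pg/mL, from the calibration fit

lod = 2 * statistics.stdev(pool_readings) / dose_response_slope  # pg/mL
```
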
For method comparison, Passing & Bablok fit was used. Correlation coefficients were calculated using least squares fit.
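At its core, Passing & Bablok regression estimates the slope as a median over all pairwise slopes. The sketch below implements only that core idea (which it shares with Theil–Sen regression); a full Passing & Bablok fit additionally applies an offset correction to the slope ranking and derives confidence intervals, and in practice a validated statistics package would be used:

```python
import itertools
import statistics

def median_slope_fit(x, y):
    """Robust fit via the median of all pairwise slopes.

    Simplified sketch of the idea underlying Passing-Bablok (and
    Theil-Sen) regression; not a complete Passing-Bablok procedure.
    """
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in itertools.combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = statistics.median(slopes)
    intercept = statistics.median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept
```

Unlike ordinary least squares, the median-based estimate is barely moved by a single outlying sample pair, which is why method-comparison studies favor this family of fits.
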
Reportable range and calibration curve
The reportable range of the responsIQ BNP test was 5-2500 pg/mL BNP. The BNP signal rises steadily with concentration in this range, but the slope of the dose-response curve is lower at the low concentration levels. To account for this non-linear behavior, two split linear regressions, one for the low concentration range and one for the high concentration range, were fitted to the respective data (Fig. 2). The slope of the linear regression for the low concentration range was typically 25% lower than the slope of the regression line for the high concentration range.
Linearity within each calibration domain was good with high correlation coefficients. The quality of the calibration curve at low concentrations is especially important as the diagnostic decision values are located in this domain.
Limit of detection and imprecision
In order to analytically validate and characterize the responsIQ BNP test, the LOD and the within-run coefficient of variation (within-run CV) were determined in ten separate runs over a period of about one year. Each run consisted of ten determinations of low endogenous BNP plasma pool (BNP concentrations from 3.5 to 7.8 pg/mL BNP as determined on ADVIA Centaur) as well as five determinations of each calibrator with 30 pg/mL, 60 pg/mL and 90 pg/mL BNP equivalents spiked into the respective low endogenous BNP plasma pool. Nine different cartridge lots and five different instruments were included in this experiment as indicated in Table 1. The resulting LODs and within-run CVs are shown in Table 1.
The data show that the responsIQ BNP test is sufficiently sensitive for the use as clinical BNP test. The LOD is very low for a POC system and is comparable to the sensitivity of laboratory analyzer BNP tests [7,8].
A design difference between the responsIQ cartridge and most POC devices might explain its high sensitivity for analytes that tend to adsorb to surfaces. The responsIQ cartridge contains no porous materials or membranes, which would display a high surface area to the sample. Most POC devices contain such high-surface-area elements, which can potentially lead to adsorption of analyte molecules to this surface and as a consequence to reduced sensitivity. Highly charged proteins, such as BNP or Troponin I, tend to adsorb to surfaces [9]. BNP has an isoelectric point of 10.95 [10] and Troponin I an isoelectric point of 9.9 [11], which leads to highly charged species of these two analytes at neutral pH.
Shelf life and accelerated temperature storage
The shelf life of the cartridge unit at 4–7 °C was assessed. A cartridge lot was produced and stored at 4–7 °C. Cartridges were tested with goat serum containing 1000 pg/mL BNP equivalents on the first day following production as well as on subsequent measurement days up to 50 weeks following production, using the same instrument throughout the complete study. The data is displayed in Fig. 3. It shows that even after 50 weeks of storage at 4–7 °C no significant reduction of the BNP signal could be observed. Thus, a cartridge shelf life of 50 weeks or greater could be demonstrated.
For accelerated temperature testing, cartridges were stored at 37 °C for 3 weeks starting one week following production. Cartridges were measured with goat serum containing 1000 pg/mL BNP equivalents (data not shown). For comparison, cartridges from the same lot, which were stored for the entire period at 4 °C, were measured in the same way. There was no significant change in the assay signal when both storage groups were compared (signal difference: 1%, each group N = 6, p = 0.744). Thus the cartridges can be considered to be stable at 37 °C for at least 3 weeks.
Comparison to reference systems
A comparison to reference systems using patient samples was carried out. The goal was to determine the correlation of responsIQ whole blood measurements with an established POC device (Alere Triage® BNP, Alere Inc., Waltham, MA, USA) as well as with an established laboratory analyzer device (Siemens ADVIA Centaur® BNP, Siemens Healthcare Diagnostics). One responsIQ instrument was installed at a local cardiologist's office. There, leftover EDTA whole blood samples were measured on the responsIQ BNP test (single determination) on-site by nurses in parallel to the routine Triage BNP test. In total, 100 patient samples were analyzed over a period of about 8 weeks without any pre-selection of the samples. The hematocrit of all samples was determined (mean value: 41%, range: 22-51%).
Comparison of responsIQ BNP values to Triage BNP is displayed in Fig. 4A. The analyzers show good overall correlation (r = 0.972). The slope of 1.06 shows that, on average, both systems give comparable results.
For comparison to the laboratory analyzer ADVIA Centaur BNP, the 100 samples described above were transferred to the laboratory of pes diagnosesysteme one to five hours after blood draw. Within the following hour, whole blood samples and plasma samples were measured in duplicate on the responsIQ. The remaining plasma was frozen at −80 °C immediately afterwards and sent to a clinical lab for later measurement on the Centaur BNP test. The reason for the second determination of the whole blood samples on the responsIQ for comparison to Centaur BNP was the low storage stability of BNP in whole blood samples at room temperature. Two samples were excluded from Centaur measurement because these had not been stored as specified.
Comparison of responsIQ BNP whole blood measurements with the laboratory analyzer test Centaur BNP is displayed in Fig. 4B. Again, a strong correlation can be observed with r = 0.974. The slope of 1.29 indicates that results determined by the responsIQ are typically higher than the Centaur BNP results. The intercept of the regression line is negligible.
The results show that the responsIQ BNP measurements from whole blood correlate to both reference devices used in this study. From Fig. 4 it can be seen that responsIQ BNP values show less variation from the regression line when correlated to the lab analyzer ADVIA Centaur BNP than to Alere Triage BNP. The tighter correlation to the Centaur BNP might be in part due to similar epitopes of the antibody pairs used in the Centaur BNP and responsIQ BNP assays, as well as the duplicate measurement which was carried out on the responsIQ in this comparison. The results from whole blood and plasma, both measured on the responsIQ, were also compared (data not shown) and show high correlation (r = 0.998). The Passing and Bablok fit resulted in a slope of 0.89 and an intercept of 5.44 pg/mL BNP. Thus, whole blood result values were on average 11% lower than the related plasma results.
Conclusions
In this study, the performance parameters limit of detection, imprecision, storage stability of cartridges and correlation to reference systems were characterized. It was shown that the responsIQ is a suitable device for the quantitative analysis of BNP from clinical samples. Also, the basic requirements for cartridge storage stability could be met. The sensitivity was in the performance range of laboratory analyzer tests. In conjunction with the safety features installed on the system, the responsIQ could help make POC analysis safer. This is the first extensive validation of an assay on the responsIQ system.
A systematic review of qualitative literature on antimicrobial stewardship in Sub-Saharan Africa
Background: Antibiotic resistance is a major problem in every region of the globe and Sub-Saharan Africa (SSA) is no exception. Several systematic reviews have addressed the prevalence of resistant organisms but few have examined the underlying causes in this region. This systematic review of qualitative literature aims to highlight barriers and facilitators to antimicrobial stewardship in SSA.
Methods: A literature search of Embase and MEDLINE(R) was carried out. Studies were included if they were in English, conducted in SSA, and reported qualitative data on the barriers and facilitators of antimicrobial stewardship or on attitudes towards resistance-promoting behaviours. Studies were screened with a simple critical appraisal tool. Secondary constructs were extracted and coded into concepts, which were then reviewed and grouped into themes in light of the complete dataset.
Results: The literature search yielded 169 results, of which 14 studies from 11 countries were included in the final analysis. No studies were excluded as a result of the critical appraisal. Eight concepts emerged from initial coding, which were consolidated into five major themes: ineffective regulation, health system factors, clinical governance, patient factors and lack of resources. The ineffective regulation theme highlighted the balance between tightening drugstore regulation, reducing over-the-counter sale of antibiotics, and maintaining access to medicines for rural communities. Meanwhile, health system factors explored the tension between antimicrobial stewardship and the need of pharmacy workers to maintain profitable businesses. Additionally, a lack of resources, actions by patients and the day-to-day challenges of providing healthcare were shown to directly impede antimicrobial stewardship and exacerbate other factors which promote resistance.
Conclusion: Antibiotic resistance in SSA is a multi-faceted issue and while limited resources contribute to the problem they should be viewed in the context of other factors. We identify several contextual factors that affect resistance and stewardship that should be considered by policy makers when planning interventions. This literature base is also incomplete, with only 11 nations accounted for and many studies being confined to regions within countries, so more research is needed. Specifically, further studies on implementing stewardship interventions, successful or not, would be beneficial to inform future efforts.
Background
According to The World Health Organisation (WHO), Antimicrobial resistance (AMR) directly threatens frontline clinical care, limiting our ability to treat infections as well as increasing the risks of interventions such as surgery and chemotherapy [1]. AMR also limits development by draining the global economy and reducing productivity due to sickness [1]. While considerable research is dedicated to the epidemiology of resistant organisms and novel therapeutics, another important facet is the clinical and behavioural factors driving resistance [2,3].
AMR is a growing problem in Sub-Saharan Africa (SSA) and is complicated by a lack of data [4]. One systematic review analysing resistance prevalence in Africa found that there was no data for 40% of African countries [4]. This is partly due to a paucity in quality-assured microbiology laboratories in the region, along with AMR being a low priority compared to other public health concerns [5,6]. Furthermore, according to Essack et al. [7], only 4.3% of countries in the WHO Africa region have national AMR plans while 14.9% have national infection prevention and control policies.
The data that are available demonstrates a significant problem. One systematic review found that E. coli isolates had a median resistance of 88.1% to amoxicillin and 80.7% to trimethoprim, while 34% of H. influenzae isolates were resistant to amoxicillin [4]. There is also considerable resistance to WHO-recommended first-line drugs [8]. The WHO-recommended treatment for sepsis in children under 2 months of age is ampicillin and gentamicin [8]. According to systematic review data, the median non-susceptibility rate of Klebsiella isolates from paediatric infections in SSA was 100% (IQR 71-100) for ampicillin and 49% (IQR 48-58) for gentamicin [8]. Additionally, the WHO acknowledge that in many developing countries illnesses such as pneumonia and dysentery can no longer be treated with first-line medications [1]. Without prompt action these trends will likely worsen and countries with stretched health resources, whose patients cannot afford the required second or third-line antibiotics, will be disproportionately affected.
There is considerable research dedicated to combatting AMR, especially in resource-limited settings [1,5]. The behaviours which drive resistance are thus relatively well defined [5]. Within SSA there are many examples of cross-sectional surveys of the prevalence of these behaviours, which include patient self-medication, overthe-counter (OTC) sales of prescription-only antibiotics and over-prescribing of antibiotics [9,10]. While these surveys identify what behaviours cause resistance it is also important to identify the underlying drivers of these behaviours. A qualitative approach can provide rich data from patients, healthcare staff and public health professionals describing why resistance-promoting behaviours happen. These data are of value to policymakers; highlighting key determinants and context of antibiotic resistance.
Systematic review and synthesis of qualitative data is a reasonably new methodology but one that has gained acceptance in scientific literature. Indeed, the Cochrane collaboration recently called reviews of qualitative evidence a "new milestone for Cochrane" [11]. There are many methods of qualitative synthesis, each having evolved from different fields [12,13]. There is little consensus on the best method, with each having their own strengths and weaknesses [12]. Studies must therefore be designed based on the questions they intend to answer [12,13].
There have been significant efforts to research barriers and facilitators to antimicrobial stewardship (AMS) in Sub-Saharan Africa, but to our knowledge no synthesis of qualitative literature has yet been published on the subject. The objective of this review is to highlight barriers and facilitators to antimicrobial stewardship and sociocultural factors driving antimicrobial resistance-promoting behaviour in patients and healthcare staff in Sub-Saharan Africa. We hope that this will provide policymakers with a more comprehensive view of the underlying factors which need to be addressed to curb AMR in this region and highlight gaps in the literature.
Research methodology
The methodology for this review was guided by the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines and checklist [14]. Selection of methodology was guided by the review written by Bearman and Dawson [13]. Specific information on how to extract, code and analyse qualitative themes was sourced from Butler et al. and Seers [15,16]. Given that we are attempting to summarise current literature and identify key recurrent messages, thematic analysis was selected as our method of qualitative synthesis [13].
For the purposes of this review, the United Nations Development Programme's definition of Sub Saharan Africa was used to define geographical inclusion [17].
Search strategy and selection criteria
Ovid online was used to search Embase and Ovid MEDLINE(R). There were no restrictions with respect to date of publication. Results were limited to publications in English. The last search occurred on 19/05/2020. Multiple searches were conducted; terms used included 'Antibiotic Resistance' or 'Antimicrobial Stewardship' along with "Africa South of the Sahara" and 'Qualitative Research'. All terms were exploded and then combined with the Boolean operator AND. Medical subject headings (MeSH) terms were also included. The full search strategy for Embase and MEDLINE(R) can be found in "Appendices 1 and 2", respectively.
The authors also searched for cross-sectional surveys, as some of these studies had qualitative elements to them. This was done systematically via similar keywords to those above but substituting 'cross-sectional survey' for 'qualitative research'. The references of included studies were also searched for additional papers. Studies first underwent abstract screening to ensure they met the inclusion criteria and then full-text screening and data extraction (Table 1).
Critical appraisal
Critical appraisal was conducted by GJP. All included studies were evaluated using the CASP (Critical Appraisal Skills Programme) qualitative research appraisal tool, a 10-item checklist covering domains including research design, data collection and analysis [18]. The first 9 items are answered 'yes', 'no' or 'can't tell', and the remaining question asks for a subjective evaluation of the value of the study [18]. Studies were scored principally by the first author. The first three CASP questions provide a screening tool to evaluate if the research question of the study can or should be assessed via qualitative methodology. Failure on this section would result in exclusion of the study. Meanwhile, the latter questions were then taken into account when resolving disagreements between studies.
Data extraction and synthesis
Data were defined as secondary constructs, that is to say the researcher's interpretations and conclusions, rather than direct quotes from study participants. Thematic analysis, as described by Seers and Bearman and Dawson, was then conducted [13,16]. The first author read each paper and coded secondary constructs, grouping them into various concepts. These concepts were then reviewed and simplified into themes once all studies had been coded. The final models were evaluated by senior authors (SO and MB) to ensure they were consistent with the source material.
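The coding workflow described above — secondary constructs coded per paper, grouped into concepts, then condensed into themes — can be sketched as a simple mapping exercise. All paper, construct and concept labels below are invented for illustration; they are not the study's actual codes.

```python
from collections import defaultdict

# Illustrative sketch of the thematic synthesis workflow described above.
# Every label here is hypothetical, for demonstration only.

# Step 1: each paper's secondary constructs are coded to a concept.
coded_constructs = [
    ("paper_A", "weak enforcement of pharmacy law", "regulation"),
    ("paper_B", "fear of losing customers to rival stores", "pharmacy business"),
    ("paper_C", "no enforcement of OTC sale rules", "regulation"),
]

# Step 2: concepts are reviewed and condensed into final themes.
concept_to_theme = {
    "regulation": "ineffective regulation",
    "pharmacy business": "healthcare system factors",
}

# Step 3: collect the supporting evidence under each theme.
themes = defaultdict(list)
for paper, construct, concept in coded_constructs:
    themes[concept_to_theme[concept]].append((paper, construct))

for theme, evidence in themes.items():
    print(theme, len(evidence))  # theme name and number of supporting constructs
```

In practice the grouping and condensing steps are interpretive judgments made by the reviewers, not mechanical lookups; the sketch only shows the bookkeeping.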
The summary measure of this review was refutational and reciprocal synthesis across studies about the barriers to AMS implementation and the causes of resistance-promoting behaviour. Other information collected included country of origin, the number and occupation of interviewees in the study and information about healthcare staff and patient's perceptions of AMR as a threat (Table 1).
Included studies
Excluding duplicates, the literature search yielded 169 results, of which 138 were excluded on abstract screening. This resulted in 35 papers which underwent full-text review and 14 papers included in the final analysis. A PRISMA flow diagram can be found in Fig. 1. Data were found relating to 11 of 46 SSA countries.
The average CASP score was 7.8/9 and the lowest score was 7/9. The most common omission was a lack of discussion of the relationship between researchers and participants, which occurred in 5 of the 11 studies. Additionally, 4 studies either did not explicitly detail their recruitment strategy or used subjective or selective recruitment criteria. Table 2 illustrates the results of the critical appraisal.
Eight concepts emerged upon the first round of coding, which were condensed into 5 main themes: ineffective regulation, healthcare system factors, clinical governance, patient factors and lack of resources. The original concepts can be found in "Appendix 3".
Ineffective regulation
This theme describes a lack of regulation at country or region-level of resistance-promoting behaviours. Torres et al. noted that while there are laws against OTC sale of antibiotics in Mozambique these were rarely enforced [19]. Moreover, all pharmacy workers interviewed in Addis Ababa by Gebretekle et al. [20] mentioned that the weak or non-existent enforcement of regulation was a major driver of inappropriate dispensing.
There is an apparent tension between medicines access and regulation. This was highlighted by Charani et al. [21], stating that while tightening regulations would probably lower the rate of OTC antibiotic sales it could also reduce access to medications if drug sellers were shut down. This is supported by Yantzi et al. [22], who added that more remote communities, who could often not afford to travel to a clinic to obtain a prescription, would be disproportionately affected by this.
Healthcare system factors
This theme relates to the nature of the healthcare systems in SSA encouraging resistance-promoting behaviours. It was sub-divided into health system heterogeneity and pharmacies as a business.
Health system heterogeneity
Healthcare professionals interviewed in Burkina Faso stated that patients often saw a combination of local healers, pharmacists, private and public healthcare services regularly [21]. This system allows patients to 'shop around' for a service that will provide antibiotics [21]. It also limits a clinician's ability to obtain an accurate drug history, making it challenging to prescribe an antibiotic the patient has not recently received [21]. Complex healthcare systems are also harder to regulate, with some authors noting that this is further complicated by the black market and more targeted medication sellers such as 'pension markets', which are aimed at older adults [21,23].
Pharmacies as a business
Interviews of drug store customers in Dar es Salaam indicated that if a pharmacy refused to sell antibiotics then customers would simply go to another [24]. Pharmacy workers interviewed by Gebretekle et al. [20] reinforced this, adding that pharmacy owners would reprimand or dismiss workers who refused sales on the grounds of stewardship. Equally, while many pharmacy customers in Blantyre felt that it was reasonable to be denied antibiotics unless they had a prescription, many also argued that pharmacies were primarily businesses and thus should never refuse sales [25]. This was also highlighted by Dillip et al. [26]. They found that even among Tanzanian accredited drug dispensing outlets, which are certified to follow national dispensing guidelines, inappropriate antibiotic dispensing was common due to the need for profit and the fear that customers would simply go elsewhere [26].
Clinical governance
This theme relates to a lack of AMS guidelines or a lack of adherence to them. Gebretekle et al. [27] found that a major barrier to implementation of AMS programmes in an Ethiopian tertiary care hospital was a lack of support for AMS policy at institutional and national level. Furthermore, many junior physicians routinely prescribed "safe" broad-spectrum antibiotics out of fear of receiving a negative career evaluation if they used a narrow-spectrum one [27]. This was echoed by physicians in surgical wards who would prolong the use of pre- and post-operative antibiotics to prevent infectious complications for which they would be blamed [27]. In Legenza et al.'s [28] study in South Africa only 30% of clinicians knew about C. difficile guidelines, with even fewer being able to correctly recall them. Furthermore, many healthcare professionals interviewed in Ghana repeatedly prescribed antibiotics based on personal preference and experience rather than referring to guidelines [29]. This was also true of prescribers interviewed by Pearson and Chandler and Yantzi et al. [22,30]. Additionally, it was apparent that affordability and physical availability of antibiotics often dictated prescriptions more than guidelines [30]. Finally, Yantzi et al. [22] added that prescribing a drug is often considered synonymous with a high standard of care by patients, adding to the pressure on clinicians to ignore stewardship guidelines. Adherence to guidelines was also examined by Rout and Brysiewicz [31], who argue that members of staff specifically trained to safeguard stewardship could help alleviate some of these problems.
Five papers in our study assessed the knowledge level of healthcare staff, and they found that AMR is generally perceived as a significant threat, although this did not always translate into practice [21,26,27,29,30]. Pharmacy workers interviewed by Dillip et al. [26] in Tanzania could all correctly recite national antibiotic prescribing guidelines but all also admitted to ignoring these guidelines. Furthermore, Gebretekle et al. [27] found that 90% of interviewed physicians recognised AMR as a national threat but more than half could not identify what organisms commonly caused resistant infections in their region.
Patient factors
This theme refers to actions by patients which encouraged resistance-promoting behaviour by healthcare professionals such as inappropriate dispensing of antibiotics.
It was commonly reported that patients recognised and remembered certain drugs and the symptoms they were prescribed for. This allowed them to demand antibiotics from the pharmacist directly, rather than attend a clinic or hospital first, which was perceived by patients as a waste of time and/or money. This was a dominant theme in Torres et al.'s [19] study in Mozambique. Many of the patients in this study knew the exact name and dose of drug they wanted [19]. Similar patterns were illustrated in all four of the included studies that interviewed patients [19, 23-25].
Another paradigm explored by Agardh et al. [24] was that marginalised communities, such as men who have sex with men (MSM), may prefer to only visit pharmacies. This is because pharmacies require less information about their personal lives and are in less public places, reducing the chance of encountering members of their community who may enquire why they are receiving medication [24].
Lack of resources
This theme constituted a lack of the facilities required to appropriately prescribe antibiotics and overstretched health services necessitating practices that promote resistance. It was an over-arching theme that appeared in several of the other themes.
Four studies mentioned that a lack of laboratory facilities prevented antibiotic prescribing based on sensitivity testing [27-30]. Without sensitivity information clinicians must rely on resistance-fostering broad-spectrum antibiotics. Moreover, Legenza et al. [28] found that limited clinician time and a lack of IT infrastructure meant that often only the available cultures perceived as "more important" are checked.
The issue of limited ward time was echoed by Mula et al. [32] who studied 'workarounds': short-cuts taken on wards to reduce the time spent on certain tasks. Relevant examples include issuing rounded-up doses or simplified regimens that patients are more likely to understand and take less time to explain [32]. While these are arguably necessary due to the significant shortfall in healthcare staff, they also contribute to AMR.
The aforementioned problem of patients going straight to pharmacies also has roots in the lack of healthcare resources. Long wait-times at clinics, principally due to inadequate staffing, make skipping them an attractive option [20,22]. Equally many healthcare facilities in SSA have a very limited range of available antibiotics, resulting in patients being prescribed the same antibiotic on every encounter [19]. This increases the likelihood of patients remembering the drug name and dose and, in combination with the internet, is a major driver of OTC sales according to Torres et al. [19].
Key findings
To our knowledge, this is the first systematic synthesis of qualitative studies surrounding antibiotic resistance in SSA. Studies were found for 11 out of the 46 SSA countries, and this lack of coverage is in keeping with findings from systematic reviews of surveillance data [4]. Additionally, many of the issues identified are either due to or exacerbated by the lack of resources in the study countries. Indeed, one could argue that healthcare system heterogeneity as a whole is a symptom of under-resourced healthcare. Lack of resources is not solely responsible for AMR in SSA, however, and several contextual factors were repeated throughout the included studies. There was consensus that a tension existed between a pharmacy worker's role in upholding antimicrobial stewardship and the need for profit in a highly competitive economy. There was also conflict between the need for regulation of drug stores and the risk of limiting access to medications. Several studies highlighted the fact that many patients see going to a clinic as expensive and time-consuming when they can simply demand OTC sale direct from the dispensary. It is also apparent that despite ongoing efforts to educate staff about antimicrobial stewardship, resistance-promoting behaviours still occur in clinics and hospitals for a variety of reasons.
Comparison with existing literature
Our findings have much in common with a 2016 review of the implementation challenges of global antimicrobial stewardship by Tiong et al. [33]. They argue that while there is a lack of resources, many stewardship interventions themselves are categorically at odds with developing economies [33]. Specifically, they cite the balance between regulation and access to medications as an example of this disconnect between policy and practice [33]. We also agree with Van Dijick et al. [34], who state that the literature base surrounding stewardship interventions is heterogeneous and complicated by a myriad of sociocultural paradigms unique to each country within SSA. Furthermore, our main themes bear considerable resemblance to the findings of Kpokiri et al. and Huttner et al., who published studies analysing the implementation of antimicrobial stewardship programmes in Nigeria and across the globe, respectively [35,36]. In particular, we share their sentiment that further publication of evaluations of stewardship interventions, regardless of their success, is exceedingly valuable to inform future efforts.
When comparing literature it should be noted that the health systems of SSA are far from identical. One example is that in one study in Ethiopia all interviewed pharmacy workers either held a bachelor's degree in pharmacy (B. Pharm) or a diploma in pharmacy [20]. Meanwhile in Tanzania the level of qualification of pharmacy worker depends on the type of pharmacy and can range from a degree-level pharmacist to any individual with a medical background, such as a nurse [24]. Furthermore, the development and enforcement of antibiotic prescribing guidelines varies greatly between different countries in SSA, and few countries have a national AMR policy [7,37]. These differences reinforce the need to tailor stewardship interventions to individual countries.
Patient education, while out of the scope of this review, should also be considered when evaluating our findings. Torres et al. conducted a systematic scoping review of factors influencing self-medication with antibiotics in low and middle income countries [38]. They found that patients who possessed low or very high knowledge of antibiotics were the most likely to engage in self-medication [38]. Some of the papers in our study also discussed this, with Gebretekle et al. finding that those with specific knowledge on antibiotics were better equipped to specifically request them while Watkins et al. reported that very few patient interviewees knew of AMR as an issue [20,23]. While it has a small literature base, patient education impacts resistance-promoting behaviours and thus should be included both in future interventions and research efforts.
Intertwined with patient education on AMR is general health literacy [39-41]. This also varies among SSA countries, though there is little data in this field [39-41]. One study of 224,751 individuals from 14 SSA countries found an average prevalence of high health literacy of 37.55% and a range of 8.93% (Niger) to 63.89% (Namibia) [39]. A systematic review by Castro-Sánchez et al. [41] suggests that there is a relationship between health literacy and antibiotic usage, but it is complex and as yet not fully understood. Furthermore, this relationship does not appear to have been studied in SSA outside of South Africa [41]. Health literacy is therefore likely another important contextual factor in stewardship in need of further research.
While the included literature showed reasonable consensus on the levels of knowledge of AMR among staff, this is not replicated in wider literature. Labi et al. found that 8.9% (14/157) of physicians in a Ghanaian tertiary care hospital considered AMR a threat locally, while Erku reports that 26.5% of 449 community pharmacists interviewed in Ethiopia believed that stewardship should be practised by drug stores [42,43]. Studies from both Ghana and Ethiopia in this review found that more than half of the healthcare staff interviewed at least acknowledged AMR as a threat [27,29]. This reinforces the fact that this literature base is far from complete and more data is required.
Limitations
This study has a number of limitations. The literature search was conducted in English, meaning that manuscripts in other languages could have been missed. While translated papers were found and included in the abstract screening, none met the inclusion criteria. Equally, the lack of geographical coverage constitutes a reporting bias, as countries where AMR is considered a less important issue are less likely to commission research into it. We did not include articles specifically on veterinary practice, another important source of AMR [44]. Finally, there is the potential for publication bias. African nationals are underrepresented in academia, and may find it more difficult to publish papers in major journals due to a lack of resources or a lack of interest on the part of the journals [45].
Conclusion
Antibiotic resistance is a growing problem and could significantly undermine healthcare in SSA. Lack of data is a major barrier to any public health interventions in this field. Therefore, wider surveillance and reporting of resistant infections along with further research into its underlying drivers are needed. Specifically, research in countries which are not currently included in the literature base should be prioritised. Moreover, funding, publication and evaluation of stewardship interventions, successful or not, could help inform future endeavours and inspire action among policy-makers. It is important to recognise that stewardship and resistance do not exist in isolation and are part of wider healthcare systems. Increased regulation seems an obvious course of action but must be balanced with continuing access to medications. Financial incentives to drug stores that comply with regulation, rather than closing those that do not, could be an acceptable middle ground in this regard. Equally, increased national and regional support for stewardship could improve its priority in a clinical setting. In summary, while increased health resources will help AMS efforts in SSA, specific interventions tailored to the unique context of the region are also required.
Appendix 3: Original concepts
These were the original thematic concepts identified by GJP and then validated and consolidated into themes by GJP and the co-authors in light of the fully coded papers.

One concept was reported in three papers: in many cases, issues such as tuberculosis, HIV and tropical diseases formed a much greater concern on wards, meaning that little attention was paid to resistant organisms [27,28].

Clinical governance: this was reported in many papers and encompassed a lack of appropriate stewardship guidelines (or a lack of adherence to them) at the level of individual wards and pharmacies. There were many reasons for this, such as lack of knowledge among clinical staff or fear that prescribing a narrow-spectrum antibiotic would result in a negative career evaluation. It also involved pharmacies needing to make a profit, and doing so by selling prescription-only medication to patients without appropriate prescriptions [19,20,22,24-31].
Regular paper Vol. 58, No 4/2011
The current study was undertaken to elucidate a possible neuroprotective role of dehydroepiandrosterone (DHEA) against the development of Alzheimer's disease in an experimental rat model. Alzheimer's disease was produced in young female ovariectomized rats by intraperitoneal administration of AlCl(3) (4.2 mg/kg body weight) daily for 12 weeks. Half of these animals also received DHEA orally (250 mg/kg body weight, three times weekly) for 18 weeks. Control groups of animals received either DHEA alone, or no DHEA, or were not ovariectomized. After such treatment the animals were analyzed for oxidative stress biomarkers such as hydrogen peroxide, nitric oxide and malondialdehyde, total antioxidant capacity, reduced glutathione, glutathione peroxidase, glutathione reductase, superoxide dismutase and catalase activities, the antiapoptotic marker Bcl-2 and brain-derived neurotrophic factor. Brain cholinergic markers (acetylcholinesterase and acetylcholine) were also determined. The results revealed a significant increase in oxidative stress parameters associated with a significant decrease in the antioxidant enzyme activities in Al-intoxicated ovariectomized rats. Significant depletion in brain Bcl-2 and brain-derived neurotrophic factor levels was also detected. Moreover, significant elevations in brain acetylcholinesterase activity accompanied by significant reduction in acetylcholine level were recorded. Significant amelioration in all investigated parameters was detected as a result of treatment of Al-intoxicated ovariectomized rats with DHEA. These results were confirmed by histological examination of brain sections. These results clearly indicate a neuroprotective effect of DHEA against Alzheimer's disease.
INTRODUCTION
Alzheimer's disease (AD) is a neurodegenerative disorder characterized clinically by progressive memory loss and subsequent dementia. AD proceeds in stages from mild and moderate to severe, and gradually destroys the brain. The pathological hallmarks of AD include accumulation of proteins (a massive accumulation of neurofibrillar tangles and β-amyloid), loss of neurons and synapses, and proliferation of reactive astrocytes in the entorhinal cortex, hippocampus, amygdala and association areas of the frontal, temporal, parietal and occipital cortex (Grosgen et al., 2010).
It has been reported that aluminum accumulates significantly in the hippocampus following chronic exposure to aluminum. Aluminum has also been observed in neuritic deposits, β-amyloid plaques and neurofibrillar tangles in the Alzheimer's brain. Chronic aluminum exposure is involved in the impairment of the mitochondrial electron transport chain (ETC) and increased production of reactive oxygen species (ROS) (Kumar et al., 2008). Moreover, aluminum promotes the formation of β-amyloid plaques (Bharathi et al., 2008) and aggregation of tau proteins in Alzheimer's disease (Walton & Wang, 2009).
Dehydroepiandrosterone (DHEA) and its sulfate metabolite (DHEAS) are the major androgens secreted by the human adrenal gland. A decline in their production is the most characteristic age-related change in the adrenal cortex (Krysiak et al., 2008; Goel & Cappola, 2011). The integrity of neuroprotection is an important component against the development of cognitive disorders such as AD. DHEAS seems to have some positive metabolic and endocrine effects that delay brain aging by recovering the impairment of neuroprotective growth factors (Luppi et al., 2009; Lazaridis et al., 2011). DHEA also has antioxidant, antilipidperoxidative, antiinflammatory and thereby antiaging actions (Kumar et al., 2008). The possibility of using DHEA in the management of various diseases has attracted considerable attention over recent years. Whereas DHEA therapy seems to be effective in treating patients with cognitive decline, depression, cardiovascular disease, osteoporosis and sexual dysfunctions, further research is needed to better assess the efficacy and safety of DHEA supplementation in patients with neurodegenerative disorders associated with advanced age (Krysiak et al., 2008). Therefore, it could be hypothesized that DHEA treatment could ameliorate or reduce the severity of symptoms of experimental AD induced in rodents. This could be assessed by measuring oxidative stress biomarkers, antioxidant status, the antiapoptotic marker Bcl-2, the neurotrophic factor BDNF and cholinergic markers.
MATERIALS AND METHODS
Dehydroepiandrosterone and all chemicals were purchased from Sigma Co (USA) and aluminum chloride from BDH Laboratory Supplies, Poole (UK).
Experimental animals. Fifty young adult female Sprague-Dawley rats weighing 100-120 g were obtained from the Animal House Colony of the National Research Center, Giza and acclimated in a specific pathogen-free area at 25 ± 1 °C and constantly controlled humidity (55 %) with a 12 h light/dark cycle. The rats were ovariectomized surgically in the Hormones Dept., NRC and were housed with ad libitum access to a standard laboratory diet consisting of 10 % casein, 4 % salt mixture, 1 % vitamin mixture, 10 % corn oil and 5 % cellulose, completed to 100 % with corn starch (A.O.A.C., 1995). Animals were cared for according to the guidelines for animal experiments of the Ethical Committee of NRC.
The animals were classified into five groups of 10 rats each.
Group one: Gonad-intact control (nonovariectomized) group treated with the vehicle (5 % dimethylsulfoxide (DMSO) in saline) three times a week for 18 weeks, six months after starting of the experiment.
Group two: Ovariectomized control group treated with the vehicle (5 % DMSO in saline) three times a week for 18 weeks, six months after surgical operation.
Group three: Ovariectomized experimental rats, receiving DHEA (dissolved in 5 % DMSO in saline) orally three times a week in a dose of 250 mg/Kg body weight (Lardy et al., 1999) for 18 weeks, six months after surgical operation.
Group four: Ovariectomized rats injected i.p. with aluminum chloride (AlCl3) dissolved in distilled water daily for 12 weeks in a dose of 4.2 mg/kg body weight (Julk & Gill, 1996), starting three months after the surgical operation; these served as the Al-intoxicated control group.
Group five: Ovariectomized rats injected i.p. with AlCl3 (4.2 mg/kg body weight) daily for 12 weeks, starting three months after ovariectomy. They then received DHEA orally in a dose of 250 mg/kg body weight three times weekly for 18 weeks.
Brain tissue sampling and preparation. At the end of the experiment, the rats were fasted overnight, anesthetized with diethyl ether and sacrificed. The whole brain of each rat was rapidly dissected, washed with isotonic saline and dried on filter paper. Each brain was divided sagittally into two portions. The first portion was weighed and homogenized in ice-cold medium containing 50 mM Tris/HCl and 300 mM sucrose at pH 7.4 to give a 10 % (w/v) homogenate (Tsakiris et al., 2004). This homogenate was centrifuged at 1400 × g for 10 min at 4 °C. The supernatant was stored at -80 °C and used for biochemical analyses that included oxidative stress biomarkers (H2O2, NO and MDA), antioxidant status (TAC, GSH, GPx, GR, SOD and CAT), the antiapoptotic marker Bcl-2, the neurotrophic factor BDNF and cholinergic markers (AchE and Ach). Brain total protein concentration was also measured to express the concentration of the different brain parameters per mg protein. The second portion of the brain was fixed in 10 % formalin for histological investigation.
The ethical conditions were applied such that the animals suffered no pain at any stage of the experiment, and the study was approved by the Ethics Committee of the National Research Center. Animals were disposed of in bags provided by the Committee of Safety and Environmental Health, National Research Center.

Biochemical analyses. Brain hydrogen peroxide (H2O2) level was determined by the spectrophotometric method according to Aebi (1984). The assay is based on the reaction of H2O2, in the presence of peroxidase, with 3,5-dichloro-2-hydroxy-benzene sulfonic acid (DHBS) and 4-aminophenazone (AAP) to form a chromophore (quinoneimine dye). The color intensity of the chromophore corresponds to the concentration of hydrogen peroxide in the sample, which can be measured at 472 nm.
Lipid peroxidation products represented by malondialdehyde (MDA) were evaluated by the method of Satoh (1978) using thiobarbituric acid (TBA) and measuring the reaction product spectrophotometrically at 534 nm.
Brain nitric oxide (NO) level was assayed by the spectrophotometric method according to Berkels et al. (2004). Promega's Griess reagent system is based on the chemical reaction between sulfanilamide and N-1-naphthylethylenediamine dihydrochloride under acidic conditions (phosphoric acid) to give a colored azo compound which can be measured at 520-550 nm.
Brain total antioxidant capacity (TAC) was assayed according to the method of Koracevic et al. (2001). The method is based on determination of the ability to eliminate added hydrogen peroxide. The remaining H2O2 is determined colorimetrically by an enzymatic reaction converting 3,5-dichloro-2-hydroxyl benzenesulfonate to a colored product that is measured at 532 nm.
Brain glutathione (GSH) was measured colorimetrically according to the method of Moron et al. (1979). This method is based on determination of the relatively stable yellow color formed when 5,5'-dithiobis-2-nitrobenzoic acid (DTNB) is added to sulfhydryl compounds, which can be measured at 503 nm.
Glutathione reductase (GR) was assayed colorimetrically according to the method of Erden and Bor (1984). The assay is based on oxidation of NADPH, which is followed at 340 nm. One unit of activity is defined as the oxidation of 1 nmole NADPH/min/mg protein.
Glutathione peroxidase (GPx) was determined colorimetrically according to the method of Ozdemir et al. (2005) using NADPH-coupled reduction of GSSG catalyzed by GR which can be measured at 340 nm.
Brain superoxide dismutase (SOD) activity was determined colorimetrically according to the method of Nishikimi et al. (1972). This assay relies on the ability of the enzyme to inhibit the phenazine methosulfate-mediated reduction of nitroblue tetrazolium dye, which can be measured at 560 nm.
Brain catalase (CAT) activity was determined colorimetrically according to the method of Aebi (1984). The assay is based on the catalase-catalyzed reaction of a known quantity of H2O2 with DHBS and AAP to form a chromophore, whose color intensity is inversely proportional to the amount of catalase in the original sample and which can be measured at 510 nm.
Brain Bcl-2 was detected by ELISA according to the method of Barbareschi et al. (1996). The assay utilizes an anti-Bcl-2 monoclonal antibody. Bcl-2 present in the sample binds to the antibody adsorbed to the microwells, and a biotin-conjugated anti-Bcl-2 antibody is added to bind to the Bcl-2 captured by the first antibody. The unbound biotin-conjugated anti-Bcl-2 is then removed during a wash step. Streptavidin-HRP is then added and binds to the biotin-conjugated anti-Bcl-2.
Following incubation, unbound streptavidin-HRP is removed during a wash step and a substrate solution reacting with HRP is added to the wells. A colored product is formed in proportion to the amount of Bcl-2 present in the sample or the standards. The reaction is terminated by addition of acid and light absorbance is measured at 450 nm.
Brain BDNF was detected by ELISA according to the method of Barakat-Walter (1996). The assay is based on a monoclonal antibody specific for BDNF precoated onto a microplate. When the standards and samples are pipetted into the wells, any BDNF present is bound by the immobilized antibody. An enzyme-linked monoclonal antibody specific for BDNF is then added to the wells and, following a wash to remove any unbound antibody-enzyme reagent, a substrate solution is added to the wells. Color develops in proportion to the amount of BDNF bound in the initial step. The color development is stopped and the intensity of the color is measured at 450 nm.
Brain AchE was determined colorimetrically according to the method of Den Blaauwen et al. (1983). The method is based on acetylcholinesterase hydrolyzing acetylcholine to acetate and thiocholine, which in the presence of dithiobis-nitrobenzoate produces 2-nitromercaptobenzoate, which can be measured at 405 nm.
Brain Ach level was measured colorimetrically according to the method of Oswald et al. (2008). The assay is based on oxidation of free choline to betaine via the intermediate betaine aldehyde. The reaction generates products which can be measured at 570 nm.
Quantitative estimation of brain homogenate total protein was carried out according to the method of Lowry et al. (1951).
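All of the colorimetric methods above ultimately convert an absorbance reading into a concentration via a standard curve. A minimal sketch of that generic calculation follows; the absorbance and concentration values are hypothetical and chosen only to illustrate the fit-and-invert step, not taken from any of the cited assays.

```python
# Generic standard-curve calculation underlying the colorimetric assays above.
# All numeric values are hypothetical, for illustration only.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Known standards: concentration (e.g. nmol/mg protein) vs. measured absorbance.
conc = [0.0, 5.0, 10.0, 20.0]
absorbance = [0.02, 0.27, 0.52, 1.02]

slope, intercept = fit_line(conc, absorbance)

# Invert the fitted curve for an unknown sample's absorbance reading.
sample_abs = 0.52
sample_conc = (sample_abs - intercept) / slope
print(round(sample_conc, 2))  # 10.0
```

For assays where color intensity is inversely proportional to the analyte (as in the catalase method), the same fit applies with a negative slope.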
Histological examination. The brain tissue was fixed in 10 % formalin for one week, washed in running tap water for 24 h and dehydrated in an ascending series of ethanol (50-90 %), followed by absolute alcohol. The samples were cleared in xylene and immersed in a mixture of xylene and paraffin at 60 °C. The tissue was then transferred to pure paraffin wax with a melting point of 58 °C, mounted in blocks and left at 4 °C. The paraffin blocks were sectioned on a microtome at a thickness of 5 µm, mounted on clean glass slides and left in the oven at 40 °C to dry. The slides were deparaffinized in xylene and then immersed in a descending series of ethanol (90-50 %). Ordinary haematoxylin and eosin stain was used to stain the slides (Drury & Wallington, 1980).
Statistical analysis. The results were expressed as means ± standard error of the mean (SE). Data were analyzed by one-way analysis of variance (ANOVA) using the Statistical Package for the Social Sciences (SPSS) program, version 11, followed by the least significant difference (LSD) test to compare significance between groups (Armitage and Berry, 1987). Differences were considered significant at P ≤ 0.05.
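The same pipeline (omnibus one-way ANOVA, then Fisher's LSD pairwise comparisons that reuse the pooled within-group variance) can be reproduced outside SPSS. The sketch below uses simulated numbers, not the study's data:

```python
import numpy as np
from scipy import stats

# Simulated measurements for three groups of 10 rats (illustrative only).
rng = np.random.default_rng(42)
control = rng.normal(10.0, 1.0, 10)
ovx = rng.normal(12.0, 1.0, 10)
ovx_al = rng.normal(15.0, 1.0, 10)
groups = [control, ovx, ovx_al]

# Omnibus one-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(*groups)

# Fisher's LSD: pairwise t-tests sharing the pooled within-group
# variance (MSE), applied only after a significant omnibus ANOVA.
df_within = sum(len(g) for g in groups) - len(groups)
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_within

def lsd_p(a, b):
    se = np.sqrt(mse * (1.0 / len(a) + 1.0 / len(b)))
    t = (a.mean() - b.mean()) / se
    return 2.0 * stats.t.sf(abs(t), df_within)

if p_anova <= 0.05:
    print(f"ANOVA p = {p_anova:.2g}; control vs OVX+Al p = {lsd_p(control, ovx_al):.2g}")
```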
RESULTS
The results in Table 1 show the effect of DHEA on brain oxidative stress markers represented by H2O2, nitric oxide and MDA levels in ovariectomized and Al-intoxicated ovariectomized rats. Ovariectomized control rats showed a significant increase in brain H2O2, nitric oxide and MDA levels when compared to gonad-intact control rats. On the other hand, treatment of ovariectomized rats with DHEA induced significant enhancement in brain GSH, GPx and GR, and an insignificant increase in brain SOD and CAT activities, when compared to those in ovariectomized control rats. In comparison with ovariectomized control rats, daily administration of AlCl3 in ovariectomized rats induced a significant reduction in brain TAC, GSH, GPx and GR and an insignificant inhibition of brain SOD and CAT. However, treatment of Al-intoxicated ovariectomized rats with DHEA produced a significant elevation in brain TAC, GSH, GPx, GR and CAT activities and an insignificant increase in brain SOD activity as compared to Al-intoxicated control rats.
The results in Table 3 show that ovariectomy resulted in a significant decrease in brain levels of Bcl-2 and BDNF in comparison with the gonad-intact control group. On the other hand, treatment of ovariectomized rats with DHEA produced a significant increase in brain Bcl-2 and BDNF levels when compared with those in the ovariectomized control group. Administration of AlCl3 in ovariectomized rats led to a significant reduction in brain Bcl-2 as well as BDNF levels as compared with those in ovariectomized control rats. The treatment of Al-intoxicated ovariectomized rats with DHEA caused a significant increase in brain Bcl-2 and BDNF levels in comparison with the Al-intoxicated control group.
The data in Table 4 demonstrate that ovariectomy caused an insignificant increase in brain AchE activity and an insignificant decrease in brain Ach level in comparison with the gonad-intact control group. The treatment of ovariectomized rats with DHEA revealed an insignificant decrease in brain AchE activity accompanied by an insignificant increase in brain Ach level in comparison with the ovariectomized control group. Aluminum administration in ovariectomized rats induced a significant elevation in brain AchE activity and a significant reduction in brain Ach level as compared with ovariectomized control rats. Treatment of Al-intoxicated ovariectomized rats with DHEA produced a significant decrease in brain AchE activity accompanied by a significant increase in brain Ach level in comparison with the Al-intoxicated control group.
Microscopic examination of brain sections of gonad-intact control rats (Fig. 1A) showed normal morphological structure of the hippocampus, as did sections from ovariectomized control rats (Fig. 1B) and from ovariectomized rats administered DHEA (Fig. 1C). On the other hand, microscopic investigation of brain sections of ovariectomized Al-intoxicated rats demonstrated amyloid plaques of various sizes in the cerebral cortex and in the hippocampus (Fig. 1D). Histological investigation of brain sections of Al-intoxicated ovariectomized rats treated with DHEA revealed a more or less normal structure in the hippocampus, i.e., all amyloid plaques that were formed under the influence of ovariectomy combined with AlCl3 administration disappeared following treatment with this hormone (Fig. 1E).
DISCUSSION
There is growing evidence that oxidative stress and estrogen deprivation after menopause or ovariectomy represent two main risk factors closely related to the development of Alzheimer's disease (Behl & Moosmann, 2002). Furthermore, aluminum has been implicated in aging-related changes and particularly in neurodegenerative diseases, as it promotes the formation of β-amyloid plaques (Bharathi et al., 2008).
The present results demonstrate a significant elevation in brain H2O2, NO and MDA levels of ovariectomized rats administered AlCl3. Tuneva et al. (2006) demonstrated an increase in ROS, including H2O2 production, in different brain areas due to Al exposure. Al could also increase the activity of monoamine oxidase (MAO) in the brain, which leads to increased generation of H2O2 (Huh et al., 2005). Aluminum could induce lipid peroxidation and alter the physiological and biochemical behavior of the living organism, a matter implicated in the increased brain MDA level (Kumar et al., 2008). The finding of a significant elevation of brain NO level after AlCl3 administration in ovariectomized rats is in agreement with the previous studies of Garrel et al. (1994) and Guix et al. (2005). The NO elevation in brain tissue may be related to Al-induced nitric oxide synthase (NOS) activity, with a consequent increase in NO production in rat brain tissue and microglial cells (Guix et al., 2005). Those authors found that cerebellar levels of inducible NOS (iNOS) protein in rats were significantly elevated following both short- and long-term Al administration. Treatment of Al-intoxicated ovariectomized rats with DHEA produced a significant decrease in brain H2O2 and MDA levels. These remarkable effects may be related to DHEA inhibiting monoamine oxidase (MAO) activity in the brain. Considering the important role attributed to MAO activity in the generation of H2O2 (Marklund et al., 1982), the inhibitory effect of DHEA on MAO activity can be regarded as a mechanism by which DHEA could reduce oxidative stress, H2O2 production and lipid peroxidation (Kumar et al., 2008).
The present results also revealed a marked decrease in brain NO level as a result of DHEA administration in ovariectomized and Al-intoxicated ovariectomized rats. DHEA has been found to inhibit NMDA-induced NO production and NO synthase (NOS) activity in hippocampal cell culture (Kurata et al., 2004).
Considering total antioxidant capacity (TAC) and antioxidant enzyme activities, ovariectomized rats exhibited a significant decrease in brain TAC. Oxidative stress resulting from ovariectomy might cause depression of the antioxidant enzyme activities and of the gene expression necessary to maintain normal brain functioning (Vina et al., 2008). A significant decrease in brain TAC level was also observed in Al-intoxicated ovariectomized rats. Aluminum has been shown to induce lipid peroxidation with depletion of several antioxidant enzymes (Mahieu et al., 2009), and long-term exposure to the resulting oxidative stress leads to exhaustion of the antioxidative enzymes.
DHEA administration produced a significant increase in brain TAC in Al-intoxicated ovariectomized rats. DHEA exhibits antioxidant properties in experimental systems (Aragno et al., 1999). Several explanations have been put forward for the multitargeted antioxidant effects of DHEA, including its upregulating effect on catalase expression (Yildirim et al., 2003) and activity (Schwartz et al., 1988), as well as its activating action on the thioredoxin system (Gao et al., 2005). DHEA could also suppress superoxide anion production (Mohan & Jacobson, 1993).
A remarkable decrease was recorded in brain GSH, GPx, GR, SOD and CAT activities in both ovariectomized rats and Al-intoxicated ovariectomized rats. Munoz-Castaneda et al. (2006) showed that the lack of estrogens after ovariectomy reduced the antioxidant status (GSH, SOD and GPx), accompanied by elevated lipid peroxides in rats. A drastic depletion of brain GSH may be due to the increased cytotoxicity of H2O2 in endothelial cells as a result of inhibition of glutathione reductase (Yousif & El-Rigal, 2004; El-Rigal et al., 2006). The significant depletion of GR, GSH and GPx in the brain of ovariectomized rats indicates damage to the second line of the antioxidant defense system. This probably further exacerbates oxidative damage via an adverse effect on critical GSH-related processes. Reduced antioxidant status as a result of increased ROS production in experimental ovariectomy has been reported previously (Li et al., 2008; Yu et al., 2008). Aluminum exposure causes impairment of the antioxidant defense system that may lead to oxidative stress (Kumar et al., 2009a,b). Aluminum causes damage via ROS in the brain more than in any other organ because of the brain's high lipid content, high oxygen turnover, low mitotic rate and low antioxidant concentration (Di et al., 2006a). The study of Di et al. (2006b) suggested that the lower SOD activity in the brain after Al exposure may be due to an altered conformation of the SOD molecule as a result of Al-SOD complex formation.
Administration of DHEA in ovariectomized and Al-intoxicated ovariectomized rats produced a detectable increase in brain GSH, GPx, GR, SOD and CAT activities. It has been reported that the natural steroid hormone dehydroepiandrosterone-3β-sulfate (DHEAS) is a specific activator of peroxisome proliferator-activated receptor α (PPARα) (Peters et al., 1996). Activation of PPARα in vivo causes an upregulation of the mRNA and protein levels of a number of peroxisomal and non-peroxisome-associated enzymes and structural proteins, among them the antioxidant enzymes CAT and Cu,Zn-superoxide dismutase, as well as mediators of the glutathione pathway (Devchand et al., 1996).
Regarding the antiapoptotic marker (Bcl-2) and brain-derived neurotrophic factor (BDNF) levels, the present data showed a significant decrease in brain levels of Bcl-2 and BDNF in ovariectomized rats and Al-intoxicated ovariectomized rats. Sharma and Mehra (2008) stated that ovariectomy decreased Bcl-2 expression and increased proapoptotic marker (Bax) expression in the rat hippocampus. An altered Bax/Bcl-2 ratio is critical to Al-induced apoptosis (Johnson et al., 2005), leading to activation of caspase-3 and release of cytochrome c. Kumar et al. (2009b) reported that Al increases p53 protein expression by activating p38 MAPK to initiate apoptosis, accompanied by a marked inhibition of Bcl-2 and increased Bax expression. Takuma et al. (2007) showed a marked decrease in the BDNF mRNA level in the hippocampus due to ovariectomy in mice. Disruption of the proinflammatory cytokine/neurotrophin balance by Al plays an important role in neurodegenerative disease (Nagatsu et al., 2000).
DHEA administration in ovariectomized and Al-intoxicated ovariectomized rats resulted in a significant increase in brain Bcl-2 and BDNF levels. One proposed mechanism is that DHEA binds to and activates the G-protein coupled membrane receptor alpha inhibitory subunit (Gαi) which, in turn, activates the protooncogenic tyrosine kinase c-Src, protein kinase C (PKC) and the MAPK/ERK pathway. These kinases activate the prosurvival transcription factor CREB, which stimulates the expression of antiapoptotic proteins such as Bcl-2 and Bcl-xl (Charalampopoulos et al., 2006). Therefore, DHEA could increase the Bcl-2 level and stimulate Bcl-2 function. Several transcription factors contributing to the regulation of BDNF promoters have been characterized, and CREB is one of them (Tabuchi et al., 2002).
With respect to cholinergic markers, the present results showed a significant increase in brain AchE activity with a concomitant decrease in Ach level in both ovariectomized rats and Al-intoxicated ovariectomized rats. Zheng et al. (2009) reported increased AchE activity in Al-overloaded rats. Kaizer et al. (2008) suggested that Al exposure increases AchE activity via an allosteric interaction between Al and the peripheral anionic site of the enzyme molecule, contributing to the pathological deterioration of AD. Al exerts cholinotoxic effects by blocking the provision of acetyl-CoA, which is required for Ach synthesis, or by impairing the activity of choline acetyltransferase (ChAT) itself (Alleva et al., 1998).
The data in the current study revealed that DHEA administration produced a significant decrease in brain AchE activity associated with a significant increase in brain Ach level in Al-intoxicated ovariectomized rats. It has been demonstrated that DHEAS significantly increases Ach release in the hippocampus (Rhodes et al., 1996). Thus, the promoting effect of DHEAS on Ach release in the hippocampus may be one mechanism for its memory-enhancing effect (Zheng, 2009).
Microscopic examination of brain sections of ovariectomized rats showed that ovariectomy did not produce any histological changes in the hippocampus, in agreement with the finding of Van Groen and Kadish (2005). On the other hand, microscopic investigation of brains of Al-intoxicated ovariectomized rats revealed the presence of β-amyloid plaques in the cerebral cortex and the hippocampus. In accordance with our results, Abd El-Rahman (2003) demonstrated that Al administration causes the appearance of neuritic plaques with a dark center in the hippocampus, typical of Alzheimer's disease.
Treatment of Al-intoxicated ovariectomized rats with DHEA revealed a more or less normal structure of the hippocampus, i.e., most of the β-amyloid plaques that were formed under the effect of AlCl3 administration disappeared under the influence of this hormone. This result is in agreement with Cardounel et al. (1999), who observed that DHEA can protect against β-amyloid toxicity in hippocampal cells.
In summary, the present study demonstrates a significant increase in brain oxidative stress parameters and a significant decrease in brain TAC, antioxidant enzyme activities, and brain Bcl-2 and BDNF levels in Al-intoxicated ovariectomized rats. A significant decrease in brain Ach level accompanied by a significant increase in brain AchE activity was also detected in these animals, and microscopic investigation of their brain sections demonstrated β-amyloid plaque formation in the cerebral cortex and the hippocampus. DHEA treatment produced significant amelioration of the brain oxidative stress markers, activation of the antioxidant enzymes, and enhancement of brain Bcl-2, BDNF and acetylcholine levels, and histological investigation of brain sections of Al-intoxicated ovariectomized rats treated with DHEA revealed a more or less normal hippocampal structure. Thus, it can be concluded that DHEA has a potent role in modulating the neurodegeneration characteristic of AD through its antioxidant, antiapoptotic, neurotrophic and antiamyloidogenic properties as well as its cholinesterase-inhibiting activity.
Figure 1. Micrographs of brain sections. Magnification ×40. (A) Gonad-intact control showing normal morphological structure of the hippocampus. (B) Ovariectomized control rat showing normal morphological structure of the hippocampus. (C) DHEA-treated ovariectomized rat showing normal morphological structure of the hippocampus (HP). (D) Al-intoxicated ovariectomized rat showing amyloid plaques of various sizes (arrow) in the cerebral cortex and hippocampus (HP). (E) Al-intoxicated ovariectomized rat treated with DHEA showing normal morphological structure of the hippocampus (HP).
Table 1. Effect of DHEA treatment on brain oxidative stress parameters in ovariectomized and Al-intoxicated ovariectomized rats.
Data are represented as mean ± S.E. of 10 female rats/group. a Significant change in comparison with the gonad-intact control group. b Significant change in comparison with the ovariectomized control group. c Significant change in comparison with the Al-intoxicated control group. (%) Percent of difference with respect to the corresponding control value.
Table 2. Effect of DHEA treatment on brain antioxidant status in ovariectomized and Al-intoxicated ovariectomized rats.
On the other hand, treatment of ovariectomized rats with DHEA recorded a significant decrease in brain NO and an insignificant reduction in brain H2O2 and MDA levels as compared with the ovariectomized control group. In addition, daily administration of AlCl3 to ovariectomized rats showed a significant elevation in all oxidative stress biomarkers (H2O2, NO and MDA) when compared to the ovariectomized control group. Treatment of Al-intoxicated ovariectomized rats with DHEA produced a significant reduction in brain H2O2, NO and MDA levels when compared with Al-intoxicated control rats. The data in Table 2 demonstrate that ovariectomy induced a significant reduction in brain TAC, GSH, GPx, GR and SOD activities in comparison with the gonad-intact control group, while brain CAT activity was decreased insignificantly.
Data are represented as mean ± S.E. of 10 female rats/group. a Significant change in comparison with the gonad-intact control group. b Significant change in comparison with the ovariectomized control group. c Significant change in comparison with the Al-intoxicated control group. (%) Percent of difference with respect to the corresponding control value.
Table 4. Effect of DHEA treatment on brain acetylcholinesterase (AchE) and acetylcholine (Ach) in ovariectomized and Al-intoxicated ovariectomized rats.
Data are represented as mean ± S.E. of 10 rats/group. b Significant change in comparison with the ovariectomized control group. c Significant change in comparison with the Al-intoxicated control group. (%) Percent of difference with respect to the corresponding control value.
|
v3-fos-license
|
2019-10-20T13:01:09.391Z
|
2019-10-01T00:00:00.000
|
204787482
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.1002896&type=printable",
"pdf_hash": "63d604d1734dc83345436f8ae8c983eed10e7762",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:818",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3ff3e1e1c2a543ee5d9a18baab035420d98a08b2",
"year": 2019
}
|
pes2o/s2orc
|
Advances in clinical trial design for development of new TB treatments—Translating international tuberculosis treatment guidelines into national strategic plans: Experiences from Belarus, South Africa, and Vietnam
1 Department of Tuberculosis, The International Union Against TB and Lung Disease, Geneva, Switzerland, 2 National Lung Hospital, Vietnam NTP, Vietnam, 3 Republican Scientific and Practical Centre for Pulmonology and TB, Minsk, Belarus, 4 Drug-Resistant TB, TB and HIV directorate, National Department of Health, Pretoria, South Africa, 5 Global Tuberculosis Programme, World Health Organization, Geneva, Switzerland
• Over the last 5 years, multiple advances in diagnosis and treatment of tuberculosis (TB) have resulted in a number of new WHO guidelines for TB care, but these recent guidelines have not always been implemented in a timely fashion, raising issues in their adoption and scale-up at country level.
• We discuss the experiences of three countries with a high burden of multidrug-resistant TB (MDR-TB)-Belarus, South Africa, and Vietnam-in implementing recent WHO guidelines on bedaquiline, a drug recently registered and recommended for the treatment of MDR-TB and the standardised shorter treatment regimen (STR) for MDR-TB.
• The process of adopting and implementing new guidelines requires national TB programmes (NTPs) to interact with multiple agencies: both intergovernmental departments and external agencies such as regulators and donors. These processes are country specific, but there are some generalised challenges that NTPs in high-burden countries experienced when implementing recent WHO MDR-TB guidance.
• With multiple trials of new regimens for MDR-TB and new classes of drugs in the clinical treatment pipeline, the frequency of new guidelines for TB is expected to increase, and it is important to support NTPs to implement and scale-up these new developments in treatment.
Introduction
One of the key missions of national tuberculosis (TB) programmes (NTPs) is to issue policy and technical guidance for clinicians and healthcare workers involved in TB care at the country level. These national policies are generally developed based on international public health guidelines, such as those issued by the World Health Organization (WHO) [1,2]. Updating national policies or technical guidelines in view of recent advances in TB diagnosis, care, and prevention has an important impact on TB patients, the health system, the community and is key to ensuring the best quality of care for people with TB.
WHO has a mandate to provide technical assistance to its Member States on different aspects of public health. The 13th General Programme of Work of WHO [3] outlines the organisation's status as a science-and evidence-based agency setting global norms and standards, with a focus on public health. Translating research findings into policies may be a challenging task, given that the design of clinical studies may not always address the main public health priority directly, and recommended interventions require substantial adaptation to the particular programme conditions and settings [4].
In 2007, WHO established the Guideline Review Committee (GRC) to provide oversight to organisational efforts to ensure that policy guidance is up-to-date, trustworthy, feasible, and developed in a transparent way, in line with the highest international standards of care [5], and adheres to WHO principles for policy development [6]. The WHO-convened Guideline Development Group advises on the scope of the guidelines, assesses the quality of available evidence, and formulates recommendations using a systematic method termed Grading of Recommendations Assessment, Development, and Evaluation (GRADE) [7]. This approach requires experts who are formulating recommendations to base their judgements not only on trial evidence but also on other considerations, such as the balance of expected desirable and undesirable effects, equity, resource use, feasibility, and acceptability to the populations targeted by the guidance. These changes have contributed to an improvement in purpose, clarity, and the methodological quality of WHO guidelines in the last decade [7].
The pace of developments in new TB diagnostics, treatment, and patient support has increased substantially over the last decade, leading to the release of over 20 new or updated WHO guidelines on different aspects of TB care since 2010 [8]. This pace is expected to continue, and the PLOS Medicine Collection of which this paper is part [9] discusses the optimal characteristics of clinical trial designs to inform future policy guidance for new TB regimens.
Already in the last 5 years, NTPs have had to respond to a number of WHO policy updates on multidrug-resistant TB (MDR-TB) treatment as new medicines became available and results from studies on the use of novel drugs and the standardised shorter treatment regimen (STR) were communicated (e.g., bedaquiline, delamanid, and the 9-12-month-shorter MDR-TB regimen) [10][11][12][13][14][15][16]. Partly as a result of these rapid changes, a number of these new treatment policies have not been adopted or fully implemented by national programmes. A recent review [17] of national policies in 29 countries highlighted national policy gaps when compared to WHO policies. Thus, in the case of WHO's recommended 9-12-month-shorter MDR-TB regimen, 45% of the countries had developed policies, but only 69% of those countries had implemented them. By the end of 2017, 62 countries, mostly in Africa and Asia, reported having used shorter MDR-TB regimens; between 2016 and 2017, the number of patients reported to have been started on the 9-12-month-shorter regimen globally increased from 2,400 to 10,000 [18]. With regard to the new drugs, bedaquiline and delamanid, 86% of countries had a policy on bedaquiline and 67% on delamanid, but the actual use of the new drugs reflected the implementation gap, with only 12,194 and 976 treatment courses procured globally for bedaquiline and delamanid, respectively, in 2017 [19].
There are multiple barriers to the adoption of international treatment guidelines, including factors relating to the acceptability and perceived feasibility of the recommendation, the individual opinion of clinicians, patient preferences, regulatory processes for new drugs, requirement for new resources, and the financial and political commitment from the Ministry of Health (MOH) [20].
The following case studies from the NTPs of three high-burden countries refer to national experiences in the introduction of new drugs and regimens for MDR-TB to illustrate how countries approached implementation of new policies for TB treatment. Belarus, South Africa, and Vietnam are all on WHO's high-burden MDR-TB list but with different epidemic patterns (see Table 1). The case studies review the experiences of the countries in implementing the interim guidance for the use of bedaquiline in the treatment of MDR-TB, issued by WHO in 2013 [21], and the revised guidelines on treatment of MDR-TB issued in 2016 that recommend the use of the 9-12-month-shorter MDR-TB regimen under certain conditions [13].
Implementation of bedaquiline in Belarus
In 2017, there were an estimated 3,500 new TB cases in Belarus, of which 2,500 had rifampicin resistance or MDR-TB [18]. In 2012, in anticipation of the approval of a new drug for TB, WHO released a handbook to advise countries on how to organise both spontaneous and active pharmacovigilance [22]. The national pharmacovigilance centre of the Belarus MOH, with its prior experience in active pharmacovigilance in the country for antiretrovirals [23], established strong links with the NTP to enhance pharmacovigilance among MDR-TB patients. The implementation of cohort event monitoring for MDR-TB treatment with regimens containing linezolid, and later bedaquiline, was a labour-intensive activity for MOH staff, undertaken without additional resources [24] (Table 2).
In mid-2013, the national TB guidelines were updated in alignment with the new WHO policy on bedaquiline use (including translation into the Russian language) and staff training was organised by the MOH under the guidance of the MDR-TB expert group (consilium). The MDR-TB consilium is a platform of multidisciplinary experts from Belarus with the aim of improving the quality of diagnosis and care and reducing the time to initiation of effective MDR-TB treatment throughout the country. The NTP also benefited from reviews of its work [25,26]. An important challenge faced by the MOH when implementing bedaquiline was for healthcare staff to adhere to proper criteria when selecting patients to be placed on regimens including this new agent. The MDR-TB expert consilium played an important role in ensuring compliance. Another limitation was having all the medicines needed for the regimen available at the start of treatment: this required coordination with all stakeholders (i.e., funders, logistics, facilities) to limit delays. The WHO-recommended 9-12-month STR for MDR-TB is contraindicated in many patients in Belarus because MDR-TB patients commonly have strains harbouring additional resistance to pyrazinamide and to key second-line drugs such as fluoroquinolones and injectable agents. This is why the focus has been on scaling up the use of bedaquiline, with other second-line drugs that had not previously been used in Belarus. Since late 2018, the NTP has introduced, under operational research conditions, a shorter regimen of 9 months consisting of all group A and B medicines recommended in MDR-TB regimens.
In 2015, following WHO advice on active TB drug safety monitoring and management (aDSM) in patients treated with novel regimens and repurposed medicines [27], Belarus became an early adopter of aDSM as a standard of care and among the first countries to contribute records to WHO's global aDSM database [28]. Using domestic and external funding, the Belarus MOH is updating the national electronic TB patient register to enhance future data management. The articulated response from the MOH, including strengthening the surveillance and preventive and curative components of the NTP [29], has resulted in high case detection of TB, TB/HIV, and drug-resistant TB and treatment success in new and relapsed TB patients approaching 90% [30].
Introducing bedaquiline in South Africa
South Africa is a country with high TB, MDR-TB, and HIV burden. The country contributes approximately 10% of global MDR-TB cases diagnosed and reported, with treatment success similar to the global rate at 54% and mortality at just above 20% [18].
The use of bedaquiline in the country started in December 2012, when the South Africa Medicines Control Council (MCC) approved the drug as part of a clinical access programme [31]. The programme was implemented at five sites and was later scaled up to 12 sites in 2014 after early successful results were obtained [32]. Once bedaquiline received full registration with the MCC, the inclusion criteria were broadened, and from 2017, bedaquiline use was decentralised to the district level to facilitate scale-up (Fig 1). In June 2018, South Africa announced that bedaquiline would be available to all eligible patients with rifampicin resistance, replacing the injectable agents in both the recent WHO-recommended longer treatment regimens as well as variants of the STR [26]. The STR has been included in national policies since 2015 [33], but similar to Belarus, the eligibility criteria for the STR have meant that its use has been limited in a population with increasingly complex resistance patterns. However, since September 2018, the South African NTP recommended a modified injectable-free STR nationwide. This regimen has the addition of linezolid for 2 months, with bedaquiline replacing the injectable agent and given for 6 months and levofloxacin replacing moxifloxacin [34].
The primary challenge to adoption and implementation of bedaquiline use has been the full regulatory approval required from the MCC, as the initial approval was only for a compassionate-use programme. The process to reach full regulatory approval took 18 months. Once registered, there was hesitancy of clinicians on the use of a new drug for which programmatic data were initially extremely limited. Subsequently, data were collected from pilot sites and published. A National Clinical Advisory Committee was formed to support implementation of WHO guidance by helping physicians design effective treatment regimens and establishing provincial committees to discuss difficult clinical cases. The NTP discussed WHO guidelines with local researchers and academia to ensure the guidance was customised to the national context and translated into practice. An additional challenge to the scale-up was maintaining a secure supply of stocks, particularly as bedaquiline was not on national tender.
Improving diagnosis and treatment of MDR-TB in Vietnam
Vietnam is one of the 20 countries considered to have both a high TB and a high MDR-TB burden [18]. In 2016, Vietnam had 106,527 registered cases of TB, and it is estimated that 20% of cases are not detected [18]. To address this problem, the NTP developed the 2X strategy (X-ray-Xpert MTB/RIF) to enhance early TB and MDR-TB detection. This strategy, in line with WHO guidance on the use of Xpert MTB/RIF [35][36] and chest radiography [37], aims to screen for and confirm TB infection and disease, including rifampicin resistance status, at the start of treatment.
The scale-up of newer diagnostics was coupled with a patient triage strategy, with bedaquiline and the STR part of the strategy. As clofazimine, a key drug in the shorter regimen and a companion drug to bedaquiline, was not registered in the country, the NTP had to apply for an investigational study to be approved by the institutional review board of the MOH so as to allow importation of the drugs needed. Bedaquiline was introduced under an import waiver in December 2015, and the shorter treatment regimen was introduced in April 2016 in three pilot provinces, with implementation of the STR expanded to an additional eight provinces after 18 months [38]. The expansion occurred after WHO's recommendations on the short-course regimen in 2017 [14]. The stepwise scale-up of bedaquiline and the STR was interrupted for 7 months pending MOH approval of the expansion; during this time, STR enrolment declined from 32% to 11%, and bedaquiline use in those eligible declined from 92% to 40% (Fig 2). Following these pilots, the STR was included in the national guidance in 2018 and is now a major treatment option for MDR-TB countrywide.
The long-term plan in Vietnam is to continue to scale up the use of bedaquiline. Based on local cohort studies, and with laboratory capacity available to identify susceptibility to almost all drugs before choosing a regimen for individual patients, the Vietnam NTP decided to apply a modified STR as the primary regimen to treat drug-resistant TB. The planned stepwise scale-up of the modified shorter treatment regimen for drug-resistant TB treatment is shown in Fig 3. In order to overcome challenges regarding drug importation for bedaquiline, the drug was registered in 2019 for compassionate use while the main regulatory process is underway.
Policy change in Vietnam requires a stepwise approach, utilising pilot projects with scale-up happening over a 3-4-year timeline. While implementing pilot projects, the NTP negotiates in-country drug registration processes. The involvement of the WHO country office, with technical assistance and support for policy change, has helped to minimise delays in these processes.
Discussion
WHO guidance strives to make recommendations that are based on the best and latest available evidence and that have applicability to diverse settings worldwide. The use of standardised evaluation methods like GRADE aims to assess study findings in a rigorous way but also to ensure that due considerations for implementation are being addressed. However, WHO's guideline processes cannot consider the nuances and sensitivities of the local socioeconomic, regulatory, and cultural conditions; this is left to the NTP when reviewing the guidance. As shown in the case studies described here, translating the research findings underlying new WHO guidance into programmatic guidance incurs substantial logistical challenges and delays for NTPs, which must mobilise the necessary resources and negotiate the regulatory framework. As in the three country examples, the process of adapting the recent WHO guidance on bedaquiline to the national situation is a multistage process, involving actors outside the NTP, such as donors and regulatory authorities, and is prone to delays.
The case studies highlight the challenges of introducing a new drug, particularly one with limited data on effectiveness and no long-term outcome data. The NTPs had to complete the necessary ethical, surveillance, and regulatory processes, and often pilot projects had to be undertaken to obtain real-life experience in the country, delaying the scale-up of the new drug (see Fig 4).
At the same time as new drugs were recommended to be added to the longer individualised regimen, WHO recommended a shorter standardised regimen for certain types of MDR-TB. NTP managers and staff had to work out how to incorporate the new drugs into their programmes as well as into a new treatment regimen, and this often required collecting data on the efficacy and safety of both a new drug and a new regimen. Similarly, they had to ensure the necessary funding not only to support the policy change process but also to procure the new drugs and the components of the standardised regimen, implement robust aDSM, and organise technical assistance or training for implementing the new policies. This required consideration of either national or donor resources, further adding to the implementation timeline, particularly for low- and middle-income countries that rely on the Global Fund and other donors to support their MDR-TB programmes. The recent update to the MDR-TB guidelines continues to recommend this dual approach of longer individualised regimens and more standardised shorter regimens [39].
To ensure that these new developments reach all relevant at-risk groups, the NTP needs to further engage with the national Ministry of Justice, Ministry of Migration, or other specific ministries. In countries that have placed TB high on the political agenda, such as Belarus, South Africa, and Vietnam, support for this engagement with other ministries may come more easily than in countries whose NTP lacks the support to engage with other ministries and national processes. This policy update process needs to be repeated with the latest WHO guidance on MDR-TB [40], which has a number of significant changes for the NTP to consider. Bedaquiline scale-up and use will continue, as bedaquiline is now a group A drug (group A drugs are drugs that are strongly recommended for inclusion in a longer MDR-TB regimen) and as such is a key component of the new all-oral individualised long regimen [26]. The STR remains in the recommendations, with a change in the injectable agent being used. With the welcome push for an all-oral regimen for MDR-TB, NTPs may want to consider operational research into the role of oral alternatives to the injectable agent in the STR, as has been done in South Africa, Belarus, and Vietnam. With another new drug, pretomanid [18], submitted for registration, and new regimens being recommended for latent TB infection (LTBI), the lessons learned implementing new or unregistered drugs and new regimens for MDR-TB will aid NTPs to ensure these new developments are adopted and scaled up, potentially using the pathways used for bedaquiline and the STR uptake.
Conclusion
The experience of Belarus, South Africa, and Vietnam suggests that intergovernmental collaboration and new guideline adoption and implementation are facilitated when TB has been placed high on the political agenda, in contrast to other countries where TB maintains a much lower profile. The pathways and tools developed by NTPs to implement the new TB drugs and regimens for MDR-TB can help ensure that the latest WHO guidance on MDR-TB and LTBI can be implemented and scaled up quickly. With strengthened programmes (including implementation of aDSM), NTPs can generate the evidence to show whether new drugs and regimens found to be effective in clinical trials will work in populations that need them most [40].
With the TB drug and regimen pipeline at its healthiest in over a decade, advances in all areas of TB care are expected in the next decade, requiring national guidelines to adapt as a priority. More updates to the guidance issued recently by WHO for the treatment of MDR-TB and LTBI are expected imminently as new drugs are submitted for registration and results from new regimen studies are published in the coming years. A culture of change needs to be fostered and budgeted for, and recognition needs to be given to countries that have supported their NTPs in this process. All actors in TB care, from international donors to national funding and regulatory agencies, need to support this approach to change, reacting promptly to and supporting new developments in TB therapeutics. The political attention to TB at the recent UN high-level meeting on TB [41] must be followed up with the appropriate funding and policy support so that NTPs are supported to rapidly review and adopt the best standard of care for people with TB. A systematic approach to evaluating how policies are used and adapted by countries and their impact, both intended and inadvertent, would be a fruitful step in the feedback cycle that WHO and other professional bodies use when planning updates of new policy guidelines.
|
v3-fos-license
|
2019-05-20T13:06:56.912Z
|
2018-12-30T00:00:00.000
|
158151341
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://ijels.com/upload_document/issue_files/55-IJELS-DEC-2018-23-FosteringLearner.pdf",
"pdf_hash": "a18c6f1e17c77c190c15e08f4646f8e908749313",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:819",
"s2fieldsofstudy": [
"Education"
],
"sha1": "a18c6f1e17c77c190c15e08f4646f8e908749313",
"year": 2018
}
|
pes2o/s2orc
|
Fostering Learner Autonomy in ESL Teaching
As we all know, changes in the field of language teaching have never stopped. Among the changes that took place in recent years, the main one has been a shift of focus from teachers onto the language learners. Learner autonomy is the new 'buzz-word' in the field of applied linguistics, and how to cultivate LA has become a key concern for educators and researchers. In order to know whether LA could be cultivated and whether the cultivation of LA could benefit the students, the author carried out an experiment in Grade One in the Mathematics and Information School at Shandong University of Technology. The experiment was carried out over one year in two classes. The instruments used in the experiment were a questionnaire and three examination papers. The questionnaire, which was adapted from Nunan (1996) and modified by the writer, included 27 items concerning autonomous learning. The results of the questionnaire and the grades of the three examinations were collected and analyzed to find out whether LA could be cultivated and whether the cultivation of autonomous learning would benefit the students' English learning. Analysis of the quantitative data was performed on the computer using SPSS. Our conclusion is that LA could be cultivated and that the cultivation of LA benefited the students' English learning. The thesis includes the methodology used in the experiment, the procedure, data analysis and the pedagogical implications we could draw from the study. Keywords— learner autonomy, autonomous learning
INTRODUCTION
All language teachers have been seeking the most effective way to help their students be more proficient in language learning, and have tried one method after another.
It was realized that they had long been pursuing a perfect teaching method, which attached much importance to only one side of learning, the teacher, while the other side, the subject of learning, the large number of learners, was neglected. Being aware of this, many language teachers gradually began to develop their interest in considering the task from the learner's point of view and to shift the focus of the classroom from a teacher-centered one to a learner-centered one. Learner autonomy, which is the central point of my thesis, refers to the ability to take full responsibility for the decisions concerning one's own learning and the accomplishment of those decisions (Dickinson 1987:11). In the classroom, instead of being passively guided by the teacher, the student tries to get the best out of classroom teaching according to both the teacher's and his own objectives. Outside the classroom, he makes reasonable plans concerning his learning and implements these plans.
II. A RESEARCH ON FOSTERING LEARNER AUTONOMY
2.1 Research Questions and Hypothesis
Research Questions
The study reported here adopted a case study approach to investigate current ELT in China, both inside and outside the English classroom, from the perspective of learner autonomy. The study is intended to find answers to the following research questions: 1. Can LA be cultivated?
2. Will the cultivation of LA benefit the students' English Learning?
This study attempted to test the hypothesis. The hypothesis is put forward on the basis of field research.
A. Alternative hypothesis (H1):
LA can be cultivated and the cultivation of LA will benefit the students' English Learning.
B. Null hypothesis (H0):
LA cannot be cultivated and the cultivation of LA will not benefit the students' English Learning.
Questionnaire
Adapted from Nunan (1996), which gives an example of the type of activities that could take place in class to sensitize learners to their learning styles, the questionnaire includes 27 items (Appendix). The author of this thesis made some necessary changes, combining it with a questionnaire on learning strategies. All 27 items tested the students on their motivation, the style of classroom organization, cognitive strategies, metacognitive strategies, communicative strategies and resource strategies (see Table 1). Each of these items is followed by five alternatives on a 5-point Likert scale scoring from 1 (strongly agree) to 5 (strongly disagree). In order to ensure the content validity of the measure, it was given to three other experienced English teachers, who all work on applied linguistics, for their comments. They suggested some modifications, and upon their recommendation some items were revised.
Table 1: Questionnaire Items within Each Category
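The per-category scoring described above can be sketched in a few lines of Python. The item-to-category mapping below is purely illustrative (the paper's actual Table 1 assignment is not reproduced here); scores follow the stated 1-5 scale, where a lower mean indicates stronger agreement.

```python
# Illustrative sketch: scoring a 5-point Likert questionnaire by strategy
# category. The item-category mapping is hypothetical, not the paper's Table 1.
from statistics import mean

CATEGORIES = {
    "motivation": [1, 2, 3],        # hypothetical item numbers
    "metacognitive": [4, 5, 6, 7],  # hypothetical item numbers
}

def category_means(responses, categories=CATEGORIES):
    """responses: dict item_number -> score in 1..5 (1 = strongly agree)."""
    return {cat: mean(responses[i] for i in items)
            for cat, items in categories.items()}

answers = {1: 2, 2: 1, 3: 3, 4: 2, 5: 2, 6: 4, 7: 2}
print(category_means(answers))  # lower mean = stronger agreement
```

Per-student category means computed this way can then be compared between the Experimental and Control Classes, as the study does.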
Tests
The students took part in three examinations. The three test papers used for the examinations were all made in groups by the experienced teachers at Shandong University of Technology. The students could be tested in listening, reading, and writing, and on integrating skills in using English. In the author's opinion, there are three reasons to prove that the papers are valid.
First, they were made in groups by the experienced teachers.
They were asked to reflect the common level of the students. Second, they had the same style as CET-4, which is considered to be the most widely acceptable way to test students' level. Although some people argue that it cannot reflect learners' real English level, CET is still a very important part of college examinations. Third, all the students in Grade One used the same test papers, and the papers were read over and given marks by the teachers in groups.
Two classes were chosen. The same English teacher, the author of the thesis, taught both of the two classes. In the Experimental Class (EC), the teacher tried to arouse the students' interest in learning English, get them to know the importance and aim of English learning, ask them to make plans for their learning and monitor the carrying out of the learning plans (details in 2.4.2). The teacher also kept abreast of the students' learning styles and trained the students in learning strategies while giving them lessons.
Then at last, the students together with the teacher assessed the results of their learning, while the Control Class (CC) just had regular classes.
The questionnaires were handed out to the students three times to find out whether LA could be cultivated. The Pre-test and Mid-test questionnaires were handed out before the mid-term and end-term examinations in the first term. The Post-test questionnaires were handed out before the end-term examination in the second term. Each time after the questionnaires were given to the students, the students had an examination and the marks were collected.
The three examinations were the mid-term (Test 1) and final-term (Test 2) examinations in the first term and the final-term examination (Test 3) in the second term.
After collecting all the data needed, analysis was made according to the test paper marks and the questionnaire results.
Teaching Methods and Activities
In the procedure of developing the learners into independent learners, the author used the teaching methods and activities stated in the following 12 items in everyday English teaching. Emphasis was put on the shift of responsibilities, active learning, cooperative learning and the extended reading materials the learners should refer to.
1. Making a proper plan at the beginning of a new term.
Supervising its implementation by both the teachers and the learners themselves. The supervising process would raise the learners' awareness that responsibility for learning rests with them.
2. Picking out some passages from the textbook and asking the students to act as teachers and teach the passages to the other students. Before teaching, the students must make good preparations, including the content of the passages and explanations for some language points in the text. Some students were really knowledgeable in some subjects and the others would be aroused by their excellent performance.
3. Giving some questions to the students to think about before performing a certain task. For example, asking the students to guess what would be talked about in the listening material before playing the tape. In this way, the students could learn more effectively because of this thinking while learning.
4. Motivating and activating their interest in learning.
To do this, the teachers should try to understand the students and get to know what their interests are.
Having informal discussions and personal communication with the students are easy ways to get to know more about them.
5. Short performances before every class, including dialogues, short plays, and introductions of good poems and essays, are all colorful and interesting ways of starting class. These activities would ensure that every student took part in an activity in English in class.
6. Sometimes when a question was raised in class, the students could be asked to give correct answers, not the teachers. In this way, the teachers would find out how well the students had learned. At the same time, the other students could also be activated by the students who were able to answer the questions.
7. Asking the students to retell the text they have learned.
They could also act out some of the passages. Or maybe the students could choose some other topics they were interested in, such as things that happened in everyday life and some fairy tales.
11. Encouraging the students, especially some top students, to adjust the process and degree of difficulty of their learning materials according to their own needs.
12. Making it clear to the students that reading is a good way of English learning. Encouraging them to do some extra reading. Novels, magazines and newspapers can all help them to meet the requirement on reading.
By doing this, both the teacher and the students would change their attitudes towards the roles they played.
The teacher was no longer the center of the classroom teaching. Instead, the teacher was the mediator, facilitator, organizer, counselor, source of information and evaluator.
The teacher also gave feedback on the students' learning methods, strategies and achievements. This test was used to find out whether there was any difference between the two classes at the beginning of the experiment. The findings were the following:
Findings
From the above analysis, we can see that after being trained in learning strategies to gain the ability to learn autonomously, the students in the EC made some progress in their English learning. The students could be trained to learn autonomously, and the cultivation of LA benefited the students' English learning. So our alternative hypothesis is correct and the null hypothesis is wrong.
From the above analysis, we can also say that at the end of the experiment, the students in the EC made some progress in their English learning. We can see this from their scores in the examinations. Language teaching should also help learners to participate in communication and to build up their language system. This study has in some degree reached the goal of helping the learners to learn to learn, but has not been successful in motivating the learners to participate in communication.
III. CONCLUSION
My thesis is only a preliminary study of learner autonomy, which is a comparatively new field of interest in applied linguistics. It attempts to promote autonomy in Chinese university students in the study of a foreign language. From the above experiment, we can see that after being trained in learning strategies to gain the ability to learn autonomously, the students in the EC made some progress in their English learning. The students could be trained to learn autonomously and the cultivation of LA benefited the students' English learning. With the maturing of learner training programs in China, students will take more responsibility for their learning and enter into learning more purposely and effectively.
In a word, we should have a full understanding of the superiority of learner autonomy, explore its potential as much as possible and make it serve as a catalyst in foreign language teaching and learning.
Questionnaire on Learner Autonomy
Dear students, I am doing some research on learner autonomy in modern language learning and teaching. I would appreciate your cooperation with this questionnaire. The information given here will not be disclosed to any third party. The following questions are to find out the students' related situations. Each of these items is followed by five alternatives on a 5-point Likert scale ranging from 1 (strongly agree) to 5 (strongly disagree). Please answer them as honestly as possible.
Thank you for your cooperation.
(1 = strongly agree, 2 = agree, 3 = no view, 4 = disagree, 5 = strongly disagree)
1. I would like to learn by small group discussions.
2.2 Methodology
2.2.1 The Subjects and the Design of the Experiment
161 students of the mathematics school from Shandong University of Technology in two classes took part in the experiment. The experiment was carried out over one year in two classes: one Experimental Class (EC) and one Control Class (CC). A questionnaire including 27 items concerning autonomous learning was handed out to the students three times. The students had examinations each time after the questionnaire was handed out. The students in the EC were trained in learning strategies and motivated to be interested in English learning. Some related information about autonomous learning was also introduced to the students, such as the necessity of making a plan and supervising the carrying out of the plan, and the importance of self-monitoring and self-assessment. The CC just had regular classes. The results of the questionnaire and the marks of the three examinations were collected and analyzed to find out whether the null hypothesis (H0) or the alternative hypothesis (H1) is correct.
International Journal of English Literature and Social Sciences (IJELS) Vol-3, Issue-6, Nov-Dec, 2018. https://dx.doi.org/10.22161/ijels.3.6.55 ISSN: 2456-7620
8. Asking the students to finish their homework by themselves. First, correcting the possible mistakes in pairs or in small groups, then checking the mistakes by themselves again and handing in the homework to the teacher. By learning from mistakes, it is much quicker, much more convenient and more effective for the students to get the correct knowledge.
9. In order to lead the students to love English and be more interested in English, different kinds of competitions could be held, such as reading competitions, oral English competitions, comprehensive competitions and so on. These were different from tests, and the students would feel less anxious and more interested.
10. Helping the students monitor and assess their progress and regress. Helping them to find out their advantages and disadvantages. And most important, helping them to fully bring out their latent potentialities and affirm their achievements.
The students were not passive receivers. They began to accept the idea of being the masters of their own learning and gradually took the responsibility of learning by themselves. They knew what they wanted to learn and what they didn't know. They made plans for their own learning, monitored the carrying out of the plans, and assessed and evaluated their learning. The learners used the learning strategies taught by the teacher, first purposely as a way to facilitate their learning, and gradually the strategies became their potential ability in language learning. The learners changed from individual learners to co-operative ones.
2. I would like the students to participate in the class more.
3. I would like to voice my opinions actively in class.
4. I think it is necessary to make a study plan each term.
5. I like to work hard according to the study plan.
6. I like to preview the lesson before each class.
7. I like to review the lesson after each class.
Table 2: Descriptive Statistics of the Original Test for the EC and CC
In the original test, the observed t-value was 1.90, which was not significant because the observed significance level was 0.965 (p > 0.05) and the observed t-value (1.90) was smaller than the critical t-value. This result shows that there was no significant difference between the two classes in the original test. This t-test was carried out to find out how well the two groups did in the tests and whether there was any significant difference between the means of the two classes after treatment. The following are the results:
Table 3: Descriptive Statistics of the Tests for the EC (Note: The 100-point grading system in the three tests was changed into a 150-point system for the convenience of statistical analysis.)
Table 4: Descriptive Statistics of the Tests for the CC
An Independent-Samples Test was used to find out whether, after one year of training in LA, the students in the EC had gained the ability to learn autonomously and could do better in their English learning. For this reason, the author intended to compare the examination marks between the EC and CC. Test 1, Test 2 and Test 3 marks were all collected. The results are shown in Table 5, Table 6 and Table 7. From Table 5, the Std. Deviation's distance is 7.19. From the statistics we can see the students in the EC did better. The Std. Deviations show that most of the students in the EC got marks near the means. But in Test 1, the Std. Deviation is 17.12, which is much larger than 11.75 in Test 2 and 11.14 in Test 3. When we look at the Std. Deviations in the CC, we can see they are all larger than those of the EC. So we can draw the conclusion that most of the students in the EC have made progress in their English learning after the training in LA.
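The between-class comparisons reported here rest on independent-samples t-tests (the study used SPSS). As a rough illustration of the statistic itself, a pooled-variance Student's t can be computed with the Python standard library; the scores below are invented for the example and are not the study's data.

```python
# Illustrative sketch of an EC-vs-CC comparison: a pooled-variance
# independent-samples (Student's) t statistic, standard library only.
# The scores are made up, not the study's data.
from statistics import mean, variance
import math

def independent_t(a, b):
    """Student's t for two independent samples with pooled variance."""
    na, nb = len(a), len(b)
    # (n-1) * sample variance recovers the sum of squared deviations
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

ec = [118, 125, 131, 122, 128, 135, 120, 127]  # Experimental Class (toy)
cc = [112, 119, 116, 121, 110, 123, 115, 118]  # Control Class (toy)
t = independent_t(ec, cc)
print(round(t, 2))  # 3.53
```

The resulting t would then be compared against the critical value for na + nb - 2 degrees of freedom, as in the paper's comparison of the observed t-value with "the given t".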
From Table 6, when comparing the EC with the CC, the author found that for most of the categories, the means are smaller in the EC than in the CC (the smaller the statistics are, the better the students employ the training strategies). But the means for Communicative Strategies 1, 2 and 3 in the EC are all larger than those in the CC, and the Std. Deviations are all smaller. From this we can draw the conclusion that the students in the EC didn't do well in
|
v3-fos-license
|
2020-07-31T13:24:59.896Z
|
2020-07-30T00:00:00.000
|
220873056
|
{
"extfieldsofstudy": [
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.neuroimage.2020.117213",
"pdf_hash": "2ef966fcd0fd42bd908afeca47c6757595263812",
"pdf_src": "Elsevier",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:820",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "2ef966fcd0fd42bd908afeca47c6757595263812",
"year": 2020
}
|
pes2o/s2orc
|
Probing the neural dynamics of mnemonic representations after the initial consolidation
Memories are not stored as static engrams, but as dynamic representations affected by processes occurring after initial encoding. Previous studies revealed changes in activity and mnemonic representations in visual processing areas, parietal lobe, and hippocampus underlying repeated retrieval and suppression. However, these neural changes are usually induced by memory modulation immediately after memory formation. Here, we investigated 27 healthy participants with a two-day functional Magnetic Resonance Imaging study design to probe how established memories are dynamically modulated by retrieval and suppression 24 hours after learning. Behaviorally, we demonstrated that established memories can still be strengthened by repeated retrieval. By contrast, repeated suppression had a modest negative effect, and suppression-induced forgetting was associated with individual suppression efficacy. Neurally, we demonstrated item-specific pattern reinstatements in visual processing areas, parietal lobe, and hippocampus. Then, we showed that repeated retrieval reduced activity amplitude in the ventral visual cortex and hippocampus, but enhanced the distinctiveness of activity patterns in the ventral visual cortex and parietal lobe. Critically, reduced activity was associated with enhanced representation of idiosyncratic memory traces in the ventral visual cortex and precuneus. In contrast, repeated memory suppression was associated with reduced lateral prefrontal activity, but relatively intact mnemonic representations. Our results replicated most of the neural changes induced by memory retrieval and suppression immediately after learning and extended those findings to established memories after initial consolidation. Active retrieval seems to promote episode-unique mnemonic representations in the neocortex not only after initial encoding but also after consolidation.
Introduction
Historically, memories were seen as more or less stable traces or engrams. After initial formation, memory traces are affected by consolidation, leading to stabilization, or by weakening, leading to forgetting (Ebbinghaus, 1885; Lashley, 1950; Müller and Pilzecker, 1900). However, contemporary research has provided ample evidence showing that memories continue to be dynamically adapted after initial encoding and, thus, can be modified by external factors throughout their existence. For instance, retrieval practice can reinforce memory traces (Karpicke and Roediger, 2008), promote meaningful learning (Karpicke and Blunt, 2011), and protect memory retrieval against acute stress (Smith et al., 2016). In contrast, retrieval suppression can prevent unwanted memories from being retrieved (Anderson and Green, 2001), and reduce their emotional impact (Gagnepain et al., 2017).
Previous neuroimaging studies identified several neural changes that could explain retrieval-mediated memory enhancement: after repeated retrieval, several studies reported decreased or increased univariate activity. Memories can also be weakened by retrieval suppression (Anderson, 2004; Anderson and Hanslmayr, 2014). However, only a few studies investigated neural changes in activity and/or activity patterns across repeated suppression. Depue and colleagues showed the time-specific involvement of the inferior frontal gyrus and medial frontal gyrus during the suppression of emotional memory (Depue et al., 2007). Gagnepain and colleagues demonstrated that the effect of suppression on visual memories may be achieved by targeted cortical inhibition of visual-related activity and activity patterns (Gagnepain et al., 2014).
Although these studies shed light upon neural changes underlying memory retrieval and suppression, all of them were based on memory modulation (i.e., retrieval and suppression) immediately after initial memory formation, except for one study that included repeated retrieval on two consecutive days (Ferreira et al., 2019). How the modulation of memory traces after initial consolidation is reflected in neural activity and mnemonic representation, as assessed by activation patterns during subsequent retrieval, is currently not well understood. Studying the neural changes underlying the modulation of initially consolidated memories can provide complementary and critical understanding of the dynamic nature of human memory. Because newly acquired memories are usually more labile compared to consolidated ones (Frankland and Bontempi, 2005) and mnemonic representations shift from the hippocampus to distributed neocortical regions following overnight sleep (Takashima et al., 2006, 2009), the effectiveness of memory modulation could be decreased, and the underlying neural changes could be different. For example, a study showed that suppression of aversive memories after overnight consolidation is harder, and involved reconfigured neural pathways during suppression (Liu et al., 2016). Also, modulation of consolidated memories may provide a clearer focus on the changes of long-term memory representation, because previously reported immediate effects (i.e., changes in activity amplitude and activity patterns) can still be caused by short-term changes in related processes such as executive control or attention. Here, we used a two-day functional Magnetic Resonance Imaging (fMRI) design to characterize the neural dynamics of initially consolidated memory. After overnight consolidation, memories were in one condition reinforced by repeated memory retrieval and in the other weakened by repeated memory suppression.
We analyzed the neuroimaging data from both the modulation phase and the subsequent memory retrieval phase to examine neural changes at the moment when a specific memory was modulated and in the final memory test, in which the aftereffects of the modulation can be measured.
Based on neural findings of memory reinstatement ( Chen et al., 2017 ; Kosslyn et al., 1997 ; Kuhl et al., 2010 ; Lee et al., 2019 ; O'Craven and Kanwisher, 2000 ; Polyn et al., 2005 ; Shohamy and Wagner, 2008 ; Wheeler et al., 2000 ; Wimber et al., 2015 ; Xue, 2018 ), we used both the level of activity amplitude (i.e., univariate analysis) and activation patterns (i.e., multivariate pattern analysis) of visual areas, the parietal lobe, and the hippocampus to characterize memory traces during memory retrieval, and further examined the linear relationship between the two neural changes within the same regions. Furthermore, we adopted a novel design to disentangle perception-related neural activity associated with the memory cues presented at test from retrieval-related neural reactivation associated with reactivated mental images. One method to separate these two processes is to use two perceptual modalities (e.g., sounds as memory cues and pictures as information to be retrieved) ( Bosch et al., 2014 ). Here, we instead used highly similar visual memory cues across different memory associations. Thus, item-specific neural patterns (at least in visual areas) during retrieval are more likely driven by retrieval-related memory reactivation than by visual processing of the memory cues.
To sum up, our primary goal was to reveal whether two behavioral techniques (i.e., retrieval and suppression) can modulate initially consolidated associative memories, and whether such modulation results in altered activity and/or activity patterns detectable with fMRI. We first investigated the possibility that associative memories can still be modulated after 24 h. Behaviorally, we asked whether repeated retrieval and memory suppression would oppositely strengthen or weaken the original memory traces. Next, using fMRI, we examined whether retrieval and suppression would modify neural measures of memory reactivation (i.e., activity amplitude and activity pattern similarity) in opposite directions.
Participants
Thirty-two right-handed, healthy young participants aged 18-35 years, recruited from the Radboud Research Participation System, finished both sessions of our experiment. They all had normal or corrected-to-normal vision and reported no history of psychiatric or neurological disease. All were native Dutch speakers. Two participants were excluded from further analyses due to memory performance at chance level. Three additional participants were excluded because of excessive head motion during scanning. We used the motion outlier detection program within FSL (i.e., FSLMotionOutliers) to detect time points with large motion (threshold = 0.9). At least 20 spikes were detected in each of these excluded participants, with the largest displacement ranging from 2.6 to 4.3, whereas included participants had fewer than ten spikes. Neuroimaging data of one additional participant were partly used: this participant was excluded from the analysis of the modulation phase (Think/No-Think paradigm) due to head motion during this task only (in total 53 spikes, largest displacement = 5.7), while their data from the other tasks were included in the analyses. Thus, data of 27 participants (16 females, age = 19-30, mean = 23.41, SD = 3.30) were included in the analyses of the final test phase, and data of 26 participants (15 females, age = 19-30, mean = 23.51, SD = 3.30) were included in the analyses of the modulation phase. All participants scored within normal levels on Dutch versions of the Beck Depression Inventory (BDI) ( Roelofs et al., 2013 ) and the State-Trait Anxiety Inventory (STAI) ( van der Bij et al., 2003 ). Furthermore, because of the two-session design (24 h interval), we used an adapted Dutch version of the Pittsburgh Sleep Quality Index (PSQI) ( Buysse et al., 1989 ) to assess the quality of sleep between the two scanning sessions. Questions about the last night's sleep were added to the original version.
We compared participants' sleep quality and duration for the last night with the average across the previous four weeks. No participant reported abnormal sleep-related behavior during the night between the two fMRI sessions (i.e., more than two hours of difference in sleep duration, bedtime, or wake-up time between the last night and the previous four weeks). The experiment was approved by, and conducted in accordance with the requirements of, the local ethics committee (Commissie Mensgebonden Onderzoek region Arnhem-Nijmegen, The Netherlands) and the Declaration of Helsinki, including the requirement of written informed consent from each participant before the beginning of the experiment.
Locations and maps
We used 48 distinctive locations (e.g., buildings, bridges) drawn on two cartoon maps as memory cues. The maps do not correspond to the layout of any real city, and participants had never been exposed to them before the experiment. During the task, the whole map was presented, with specific locations sequentially highlighted by colored frames as memory cues. In this way, we kept visual processing during the memory tasks largely consistent.
Pictures
Forty-eight pictures (24 neutral and 24 negative) from the International Affective Picture System (IAPS) ( Lang et al., 1997 ) were used in this study; each picture belongs to one of four categories: animal (e.g., cat), human (e.g., reading girl), object (e.g., clock), or scene (e.g., train station). The category information was used for the subsequent memory-based category judgment test.
All images were converted to the same size and resolution for the experiment.
Picture-location associations
Each picture was paired with one of the 48 map locations to form specific picture-location associations. We (W.L and J.V) carefully screened all associations to avoid explicit semantic relationships between picture and location (e.g., a lighter at the fire department). All 48 picture-location associations were divided into three groups for different types of modulation (see Modulation phase). On each map, 24 locations were paired with six pictures from each category. One-third of the associations on that map (8 associations; 2 pictures from each category) were retrieval associations (i.e., "think" associations), one-third were suppression associations (i.e., "no-think" associations), and the remaining one-third were control associations.
Overview of the design
This study is a two-session fMRI experiment with a 24 h interval between sessions ( Fig. 1 A). The Day 1 session consisted of the familiarization phase ( Fig. 1 B), the study phase ( Fig. 1 C), and the immediate typing test. The Day 2 session consisted of the second typing test, the modulation phase ( Fig. 1 D), and the final memory test ( Fig. 1 E). Among these phases, the familiarization, modulation, and final memory test phases were performed in the scanner, while the study phase and the two typing tests were performed in the behavioral lab. The trial structure and timing are depicted in Fig. S1 . During scanning, stimuli were projected onto a translucent screen (diameter = 598 mm; maximum projection size = 369 × 277 mm) mounted at the end of the scanner's bore and viewed via a mirror mounted on the head coil; during behavioral sessions, stimuli were presented on a 24-inch LED monitor. During MRI scanning, the distance between the mirror and the projection screen was around 85.5 cm. To keep the visual presentation as consistent as possible, we set the resolution to 1280 × 1024 for both set-ups.
Familiarization phase
To obtain picture-specific brain responses to all 48 pictures, we instructed participants to perform the familiarization phase while being scanned ( Fig. 1 B). The second purpose of this task was to familiarize participants with the pictures that would later be associated with locations. Each picture (resolution = 400 × 400) was shown four times at the center of the screen with a visual angle of 7° for 3 s, distributed over four functional runs. The presentation order was pseudorandom and pre-generated by self-programmed Python code. Dependencies between the orders of different runs were minimized to prevent potential sequence-based memory encoding. To keep participants focused during the task, we instructed them to categorize each presented picture via a multiple-choice question with four options (animal, human, object, and scene). Inter-trial intervals (ITIs) were drawn from an exponential model (mean = 2 s, minimum = 1 s, maximum = 4 s). Participants' responses were recorded with an MRI-compatible response box.
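For concreteness, the exponential ITI model (mean = 2 s, bounds 1-4 s) can be sketched as below. The `sample_itis` helper and its rejection-sampling approach are our own illustrative assumptions; the text specifies only the mean, minimum, and maximum.

```python
import random

def sample_itis(n_trials, mean=2.0, lo=1.0, hi=4.0, seed=0):
    """Draw ITIs (seconds) from a truncated exponential model.

    A shifted exponential with the target mean is sampled and draws
    above the maximum are rejected; this scheme is an assumption,
    not the authors' actual generation code.
    """
    rng = random.Random(seed)
    itis = []
    while len(itis) < n_trials:
        x = lo + rng.expovariate(1.0 / (mean - lo))  # shifted exponential
        if x <= hi:  # reject draws above the 4 s maximum
            itis.append(x)
    return itis

itis = sample_itis(192)  # 48 pictures x 4 repetitions
```

All sampled intervals then fall within the stated 1-4 s bounds, with a mean close to 2 s.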
Study phase
Each picture-location association was presented twice, in two separate runs ( Fig. 1 C). During each study trial, the entire map (resolution = 1024 × 768) was first presented for 2.5 s; then a BLUE frame was overlaid on the map to highlight one of the 48 locations for 3 s; finally, the picture and its associated location were presented side by side for 6 s. We pre-generated a pseudorandom trial order to minimize the similarity between the orders of the familiarization and study phases.
Typing test phase
Immediately after the study phase, participants performed a typing test (Day 1) assessing picture-location association memory. Each location was presented again (4 s) in an order that differed from the study phase, and participants had a maximum of 60 s to describe the associated picture by typing its name/description on a standard keyboard. Twenty-four hours later (Day 2), participants performed the typing test again in the same behavioral lab. The procedure was identical to the immediate typing test, but with a different trial order.
Modulation phase
The modulation phase was the first task participants performed during the Day 2 MRI session. We used the think/no-think (TNT) paradigm with trial-by-trial self-report measures to modulate the initially consolidated memories ( Fig. 1 D). The same paradigm has been used in previous neuroimaging studies, and the self-report does not affect the underlying memory control process ( Anderson, 2004 ; Levy and Anderson, 2012 ). The 48 picture-location associations were divided into three conditions: one-third (16 associations) were assigned to the retrieval condition ( "Think "), one-third to the suppression condition ( "No-Think "), and the remaining one-third to the control condition. The assignment was counterbalanced between participants; therefore, at the group level, each picture-location association had a probability of about 33.3% of belonging to each of the three conditions. Associations belonging to different conditions underwent different types of modulation during this phase. Locations belonging to the control condition were not presented during this phase. For a retrieval trial, the entire map was presented (visual angle = 18°) with one particular location highlighted with a GREEN frame for 3 s, and participants were instructed to recall the associated picture quickly and actively and to keep it in mind until the map disappeared from the screen. For a suppression trial, one location was highlighted with a RED frame for 3 s, and participants were instructed: "when you see a location highlighted with a RED frame, you should NOT think about the associated picture. Instead, you should try to keep an empty mind during this stage. It is a difficult task, and it is totally fine that sometimes you still think about the associated picture.
But please do NOT close your eyes, focus on something outside the screen, or think about something else in your life. These strategies, although useful, could negatively affect the brain activity that we are interested in ……" After each retrieval or suppression trial, participants had up to 3 s to report their experience during the cue presentation. Specifically, they answered a multiple-choice question with four response options ( Never, Sometimes, Often, and Always ) by pressing a button on the response box to indicate whether, and how often, the associated picture had entered their mind during that particular trial.
The modulation phase consisted of five functional runs (64 trials per run). In each run, 32 locations (half retrieval trials, half suppression trials) were presented twice. Therefore, each memory cue not belonging to the control condition was presented ten times during the entire modulation phase. Again, we pre-generated the presentation orders to prevent similar sequences across the five modulation runs. Between trials, a fixation cross was presented for 1-4 s (mean = 2 s, exponential model) as ITI.
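The counterbalanced assignment of the 48 associations to the three conditions could be implemented roughly as follows. The fixed shuffle plus per-participant rotation, the seed, and the function name are illustrative assumptions; the text only states that assignment was counterbalanced so each association falls in each condition about one third of the time across participants.

```python
import random

def assign_conditions(pair_ids, participant_idx, seed=1234):
    """Assign 48 picture-location pairs to think/no-think/control.

    A fixed shuffled order is rotated by 16 items per participant,
    so across participants each pair cycles through all three
    conditions (an assumed counterbalancing scheme).
    """
    rng = random.Random(seed)
    order = pair_ids[:]
    rng.shuffle(order)
    shift = (participant_idx % 3) * 16
    rotated = order[shift:] + order[:shift]
    return {
        "think": rotated[:16],
        "no_think": rotated[16:32],
        "control": rotated[32:48],
    }

conds = assign_conditions(list(range(48)), participant_idx=0)
```

With this rotation, a pair that is a "think" item for one participant becomes a "no-think" or control item for the next two, satisfying the roughly 33.3% group-level probability stated in the text.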
The final memory test phase
[Fig. 1 caption, displaced: The trial structure with exact timing is depicted in Fig. S1 . (B) During the familiarization phase, all pictures of the to-be-remembered associations were randomly presented four times for familiarization and for the estimation of picture-specific activation patterns; to keep participants focused, on each trial they categorized the picture shown as an animal, human, location, or object. (C) Study phase: participants were trained to associate memory cues with presented pictures. (D) Modulation phase: after 24 h, the Think/No-Think paradigm was used to modulate consolidated associative memories; participants actively retrieved associated pictures ( "retrieval ") or suppressed the tendency to recall them ( "suppression ") according to the color of the frame (GREEN: retrieval; RED: suppression) around the location. (E) Final memory test phase: participants performed the final memory test after the modulation; for each of the 48 location-picture associations, locations were presented again, and participants reported their memory confidence and categorized the picture that came to mind.]

After the modulation phase, participants performed the final memory test within the scanner ( Fig. 1 E). All 48 locations (including the retrieval/suppression associations as well as the control associations) were highlighted one by one while the entire map was shown again with a BLUE frame. During its presentation (4 s), participants were instructed to recall the associated picture covertly but as vividly as possible and to keep the mental image in mind. Critically, visual input during this phase was highly similar across trials because the entire map was always presented, just with different locations highlighted. Next, participants were asked to answer two multiple-choice questions within 7 s (3.5 s for each question): (1) "How confident are you about the retrieval?
" They responded with one of the four following response options: Cannot recall, low confidence, middle confidence, and high confidence. (2) "Please indicate the category of the picture you were recalling? " They also had four options to choose from (Animal, Human, Object, and Scene).
Familiarization phase
We did not calculate the accuracy of the category judgment during the familiarization phase, because the categorization of a picture can be a rather subjective decision and is not relevant to the aim of this study. However, we used individual responses to control for subjective categorization in the subsequent evaluation of memory performance. Specifically, if a participant consistently labeled a given picture across all four repetitions with a category different from our predefined label, we generated an individual-specific category label and used it to evaluate the responses for this picture in the final test. Otherwise, we used the predefined labels.
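The individual-specific relabeling rule can be expressed compactly. The strict all-four-repetitions criterion below is our reading of "consistently labeled"; the function name is ours.

```python
from collections import Counter

def effective_label(responses, predefined):
    """Derive the category label used to score the final test.

    `responses` holds a participant's four category judgments for one
    picture during familiarization. Only a fully consistent deviation
    (all four repetitions agree on another category) overrides the
    predefined label (an assumed reading of "consistently labeled").
    """
    counts = Counter(responses)
    label, n = counts.most_common(1)[0]
    if n == len(responses) and label != predefined:
        return label
    return predefined
```

For example, a participant who labels a predefined "object" picture as "scene" on all four repetitions is scored against "scene" in the final test; a 3-out-of-4 deviation keeps the predefined label.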
Typing test
Participants' answers were evaluated by two native Dutch-speaking experimenters (S.M and J.V) independently. The general principle was that if an answer contained enough specific information (e.g., a little black cat) to allow the experimenter to identify the picture among the 48 pictures used, it was labeled as correct. In contrast, if the answer was not specific enough (e.g., a small animal), it was labeled as incorrect. We used Cohen's kappa coefficient (κ) to measure inter-rater reliability; in general, κ larger than 0.81 suggests almost perfect reliability. If the two assessors gave different evaluations, a third assessor (W.L) determined the final result (i.e., correct or incorrect). After the immediate typing test, we only invited participants with at least 50% accuracy to the Day 2 session. Three out of 35 recruited participants did not continue on Day 2 due to low performance on Day 1. For the typing test 24 h later, participants' responses were evaluated by the same experimenters again. Based on the responses in this typing test, we identified picture-location associations that a given participant had not learned or had already forgotten. These associations were not considered in the subsequent behavioral and neuroimaging analyses, because there was no memory association to be modulated. We calculated the average accuracies for the immediate typing test and the typing test 24 h later and investigated the delay-related decline in memory performance using a paired t -test.
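Cohen's kappa for the two raters' correct/incorrect judgments can be computed as below; this is a generic sketch of the standard formula, not the authors' scoring script.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same trials.

    kappa = (p_obs - p_exp) / (1 - p_exp), where p_obs is observed
    agreement and p_exp is agreement expected by chance from each
    rater's marginal label frequencies.
    """
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    p_exp = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (p_obs - p_exp) / (1 - p_exp)
```

Perfect agreement yields κ = 1, and agreement exactly at chance level yields κ = 0, matching the interpretation thresholds cited in the text.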
Modulation phase
Responses during the modulation phase were analyzed separately for retrieval trials and suppression trials. We first calculated the percentage of each option (never, sometimes, often, and always) chosen across the 160 retrieval trials and 160 suppression trials for each participant. Next, we quantified the dynamic changes in task performance across repetitions (runs). Before the following analyses, we coded the original categorical variable numerically (Never = 1; Sometimes = 2; Often = 3; Always = 4). For all established picture-location associations, we calculated the average retrieval frequency rating (based on retrieval trials) and intrusion frequency rating (based on suppression trials) for each repetition. We used a repeated-measures ANOVA to model changes in retrieval and intrusion frequency ratings across repetitions, testing whether repeated attempts to retrieve or suppress a memory trace would strengthen or weaken the associations, respectively. Additionally, to quantify individual differences in memory suppression efficiency ( Levy and Anderson, 2012 ), we calculated an intrusion slope score for each participant: using all intrusion ratings for suppression trials, we fit a linear regression to obtain the slope of intrusion ratings across the ten repetitions. Participants with more negative slope scores are better at downregulating memory intrusions than those with less negative slope scores.
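The per-participant intrusion slope score reduces to a simple linear fit. A minimal sketch using NumPy, where the 10-point rating series is made up for illustration:

```python
import numpy as np

def intrusion_slope(ratings_per_rep):
    """Slope of mean intrusion ratings over the 10 suppression repetitions.

    ratings_per_rep: mean rating per repetition (Never=1 ... Always=4).
    A more negative slope indicates better downregulation of intrusions.
    """
    reps = np.arange(1, len(ratings_per_rep) + 1)
    slope, _intercept = np.polyfit(reps, ratings_per_rep, 1)
    return slope
```

A participant whose mean intrusion rating falls steadily from 3.0 to 1.2 over ten repetitions gets a slope of about -0.2 per repetition.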
The final memory test phase
For each trial of the final memory test, we calculated both a subjective memory measure based on the confidence rating (1, 2, 3, 4) and an objective memory measure based on the category judgment (correct/incorrect). We also recorded the reaction times (RT) for category judgments to estimate the speed of memory retrieval. To investigate the effect of the type of modulation on subjective memory, objective memory, and retrieval speed, we performed repeated-measures ANOVAs to detect within-participant differences between RETRIEVAL ASSOCIATIONS, SUPPRESSION ASSOCIATIONS , and CONTROL ASSOCIATIONS . To assess individual differences in suppression-induced forgetting, we calculated a suppression score as the difference between the objective memory measure of the suppression associations ( "no-think " items) and that of the control associations. Participants who showed more forgetting as a result of suppression had more negative suppression scores.
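The suppression score itself is a simple difference. The sketch below adopts the sign convention that more negative scores mean more suppression-induced forgetting, consistent with the text; the function name is ours.

```python
def suppression_score(acc_no_think, acc_control):
    """Suppression-induced forgetting score: no-think minus control.

    Inputs are proportions correct on the category judgment for each
    condition. Negative values mean suppressed items were remembered
    worse than control items (more forgetting), per the sign
    convention stated in the text.
    """
    return acc_no_think - acc_control
```

For example, a participant with 70% accuracy on no-think items and 85% on control items gets a score of -0.15, indicating suppression-induced forgetting.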
Combinatory analysis of modulation and final test phase
To replicate the previously reported relationship between memory suppression efficiency during the TNT task and suppression-induced forgetting during the final memory test ( Levy and Anderson, 2012 ), we correlated suppression scores with intrusion slope scores across all participants. Notably, the sample size ( N = 26) of this cross-participant correlational analysis is modest, but it serves only as a replication of the previous study and as a check of the memory suppression manipulation.
During the Day 1 session, the anatomical T1 image was acquired first, followed by the field map sequence. Before the four EPI-based pattern localization runs, 8 min of resting-state data were acquired from each participant using the same sequence parameters. The Day 2 session began with the field map sequence. Thereafter, we acquired six EPI-based task-fMRI runs (five runs of the modulation phase and one run of the final test phase).
Preprocessing of neuroimaging data
All functional runs underwent the same preprocessing steps using FEAT (FMRI Expert Analysis Tool) Version 6.00, part of FSL (FMRIB's Software Library, www.fmrib.ox.ac.uk/fsl ) ( Jenkinson et al., 2012 ). In general, the pipeline was based on procedures suggested by Mumford and colleagues ( http://mumfordbrainstats.tumblr.com ) and the recommendations for ICA-based Automatic Removal of Motion Artifacts (ICA-AROMA) ( Pruim et al., 2015 ). The first four volumes of each run were removed from the 4D sequences for scanner stabilization. The following preprocessing steps were applied: motion correction using MCFLIRT ( Jenkinson et al., 2002 ); correction of field inhomogeneities using B0 unwarping in FEAT; non-brain removal using BET ( Smith, 2002 ); and grand-mean intensity normalization of the entire 4D dataset by a single multiplicative factor. We used different spatial smoothing strategies depending on the type of analysis: for data used in univariate analyses, we applied a 6 mm kernel; for data used in multivariate pattern analyses, no spatial smoothing was performed, to preserve voxel-wise pattern information. In addition to the default FSL motion correction algorithm, we used ICA-AROMA to further remove motion-related spurious noise and chose the results from the "non-aggressive denoising " algorithm for the following analyses. Prior to time-series statistical analyses, high-pass temporal filtering (Gaussian-weighted least-squares straight-line fitting with sigma = 50.0 s) was applied.
Registration between all functional data, high-resolution structural data, and standard space was performed using the following steps. First, we used Boundary-Based Registration (BBR) ( Greve and Fischl, 2009 ) to register the functional data to the participant's high-resolution structural image. Next, registration of the high-resolution structural image to standard space was carried out using FLIRT ( Jenkinson et al., 2002 ; Jenkinson and Smith, 2001 ) and further refined using FNIRT nonlinear registration ( Andersson et al., 2007 ). The resulting parameters were used to align maps between native and standard space and to back-project regions of interest into native space.
Anatomical region-of-interest (ROI) in fMRI analyses
Based on previous pattern reinstatement studies (Jonker et al., …), we hypothesized that the ventral visual cortex (VVC), parietal lobe, and hippocampus might carry picture-specific and category-specific information about the memory contents during retrieval. Therefore, we chose them as the ROIs in our fMRI analyses. All ROIs were first defined in common space and back-projected into each participant's native space for within-participant analyses, using the registration parameters obtained from FSL.
We defined the anatomical VVC ROI based on the Automated Anatomical Labeling (AAL) human atlas, implemented in the WFU PickAtlas software ( http://fmri.wfubmc.edu/software/PickAtlas ). This procedure was used in a previous neural reactivation study by Wimber and colleagues ( Wimber et al., 2015 ). Brain regions including the bilateral inferior occipital lobe, parahippocampal gyrus, fusiform gyrus, and lingual gyrus were extracted from the AAL atlas and combined into the VVC mask. The VVC mask was mainly used as the boundary for locating visual-related voxels in the following activity pattern analyses.
The ROIs of the hippocampus and parietal lobe (including the angular gyrus (AG), supramarginal gyrus (SMG), and precuneus) were defined using bilateral masks from the AAL provided by the WFU PickAtlas software. To yield better coverage of participants' anatomy, we extended each original mask by two voxels in each direction (i.e., dilated by a factor of 2 in the software).
Univariate generalized linear model (GLM) analyses of response amplitude

GLM analyses of neuroimaging data from the final test phase
To investigate how the different modulations (retrieval/suppression) affect subsequent univariate activation, we ran voxel-wise GLM analyses of the final test run. All time-series statistical analyses were carried out using FILM with local autocorrelation correction ( Woolrich et al., 2001 ) within FEAT. In total, six regressors were included in the model. We modeled the presentation of the memory cues (locations) as three regressors (duration = 4 s) based on their modulation history (retrieval, suppression, or control). To account for the effect of unsuccessful memory retrieval, we modeled the location-picture associations that participants could not recall as a separate regressor. Lastly, button presses were modeled as two independent regressors (confidence and category judgments). All regressors were convolved with the default hemodynamic response function (HRF) within FSL.
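A single HRF-convolved regressor of the kind entered into these GLMs (e.g., cue events with a 4 s duration) can be sketched as follows. The simplified double-gamma shape is an assumption standing in for FSL's default HRF, and both function names are ours.

```python
import numpy as np

def double_gamma_hrf(tr, duration=32.0):
    """Simplified double-gamma HRF sampled at the TR.

    Peak near 5 s and undershoot near 15 s; an illustrative stand-in
    for FSL's default HRF, not its exact implementation.
    """
    t = np.arange(0, duration, tr)
    peak = t ** 5 * np.exp(-t)
    undershoot = t ** 15 * np.exp(-t)
    hrf = peak / peak.max() - undershoot / (6 * undershoot.max())
    return hrf / hrf.sum()

def cue_regressor(onsets, dur, n_scans, tr):
    """Boxcar of cue onsets (seconds) convolved with the HRF."""
    box = np.zeros(n_scans)
    for onset in onsets:
        box[int(round(onset / tr)):int(round((onset + dur) / tr))] = 1.0
    return np.convolve(box, double_gamma_hrf(tr))[:n_scans]
```

One such regressor per condition (retrieval, suppression, control, unrecalled) plus the button-press regressors would form the columns of the design matrix.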
We conducted two planned contrasts (retrieval vs. control and suppression vs. control), first in native space, and then aligned the resulting statistical maps to MNI space using the registration parameters. These aligned maps were used for group-level analyses and corrected for multiple comparisons using the default cluster-level correction within FEAT (voxel-wise Z > 3.1, cluster-level p < 0.05 FWE-corrected). All contrasts were first conducted at the whole-brain level. Then, for the ROI analyses, we extracted beta values within these ROIs from the final test and compared them for the same contrasts (retrieval vs. control and suppression vs. control).
GLM analyses of neuroimaging data from the modulation phase
We ran voxel-wise GLM analyses for each modulation run separately. In total, three regressors were included in the model. We modeled the presentation of the memory cues (locations) as two regressors (duration = 3 s) according to the modulation instruction (retrieval or suppression). Button presses were modeled as one independent regressor. Also, if applicable, location-picture associations that participants could not recall were modeled as an additional regressor. For ROI analyses, we extracted beta values within these ROIs from the whole-brain maps of each modulation run separately. We investigated repetition-related changes in beta values using repeated-measures ANOVAs for retrieval and suppression separately.
Multivariate pattern analyses of brain activation patterns

Activity pattern estimation
All preprocessed (unsmoothed) familiarization, modulation, and final-test functional runs were modeled in separate GLMs in each participant's native space. For each familiarization trial, we generated a separate regressor using the onset of picture presentation and a duration of 3 s. In addition, we generated one regressor for all button presses of the category judgment to control for motor-related brain activity. In total, 49 regressors were included in the model. This procedure yielded a separate statistical map ( t values) for each trial. Similarly, for each modulation and final-test run, we generated a separate regressor using the onset of the location (memory cue) presentation and a duration of 3 s. However, button presses were not included in these models because they may carry ongoing memory-related information. Again, we obtained a separate t map for each modulation or test trial.
Searchlight analysis of picture-sensitive voxels
For each participant, brain data from the familiarization phase (i.e., the pattern localization phase) were analyzed using the searchlight method ( Kriegeskorte et al., 2006, 2008 ) across the entire brain. More specifically, for each searchlight (centered at every voxel in the brain, a sphere with a radius of 5 mm) of each participant, we trained a Support Vector Classification (SVC) classifier to differentiate the activity patterns elicited by each picture (or each category) and tested its predictive power using leave-one-run-out cross-validation. SVC was implemented using the C-Support Vector Machine within the scikit-learn package ( https://scikit-learn.org/stable/ ) ( Pedregosa et al., 2011 ). Multiclass classification was handled according to a one-vs-one scheme. We used the default parameters of the function (regularization C = 1, radial basis function kernel with degree = 3); the same settings were applied to all classifications described below. Specifically, for each trial, activity patterns within the searchlight were extracted. Since each picture was presented four times across the four pattern localization runs, we obtained four activity patterns within the searchlight for each picture. Within-participant classification was performed using leave-one-run-out cross-validation: activity patterns of one run were held out as the test set, and the remaining three runs were used as the training set for the SVC classifier. After all training-testing folds, the analysis yielded one accuracy value representing the overall predictive power of the activity patterns within that particular searchlight. The searchlight moved through the entire brain of each participant, yielding a classification accuracy map in which each voxel stored the classification accuracy of the searchlight sphere centered on it.
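The core leave-one-run-out SVC step for one searchlight sphere can be sketched with scikit-learn. The helper name and the synthetic data layout in the usage note are assumptions, but the classifier settings match the scikit-learn defaults named in the text (C = 1, RBF kernel).

```python
import numpy as np
from sklearn.svm import SVC

def loro_accuracy(patterns, labels, runs):
    """Leave-one-run-out SVC accuracy for one searchlight sphere.

    patterns: (n_trials, n_voxels) array of activity patterns;
    labels: stimulus identity per trial; runs: run index per trial.
    Uses scikit-learn's default C-SVM (C=1.0, RBF kernel), as stated
    in the text.
    """
    accs = []
    for held_out in np.unique(runs):
        train, test = runs != held_out, runs == held_out
        clf = SVC()  # defaults: C=1.0, kernel='rbf'
        clf.fit(patterns[train], labels[train])
        accs.append(np.mean(clf.predict(patterns[test]) == labels[test]))
    return float(np.mean(accs))
```

Running this for a sphere centered at every brain voxel, and writing each returned accuracy back to the center voxel, produces the per-participant accuracy map described above.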
To allow group inferences about brain regions, we performed one-sample t -tests on the classification accuracy maps and tested them against chance (chance level = 1/48 ≈ 2%). Since we aimed to identify picture-sensitive voxels within the VVC, we intersected the voxels identified by the searchlight ( p uncorrected < 0.001) with the anatomical VVC mask. Because choosing p uncorrected < 0.001 as the threshold is arbitrary, we also used other thresholds ( p uncorrected < 0.05 and p uncorrected < 0.01) to define significant voxels and further validated our results using these threshold-dependent masks.
We had already used the within-participant searchlight analysis to localize stimulus-sensitive voxels in visual areas, and we validated these identified VVC voxels in a cross-participant procedure. In this way, we explored whether visual perception-related activation patterns of these voxels are shared across participants. Specifically, instead of performing leave-one-run-out cross-validation within each participant, we used threefold cross-validation across the entire sample. First, t maps for each picture and each run were transformed from native space to standard space to enable cross-participant model training and testing. Then, the identified voxels within the VVC were used as a mask to extract spatial activation patterns. Finally, data from two-thirds of the participants were used to train the SVC model, and the remaining one-third were used to assess it. Note that the cross-participant classification is merely a confirmatory analysis of the searchlight classification and should not be regarded as an independent analysis. It was repeated for the three clusters of VVC voxels defined under the different thresholds ( p uncorrected < 0.05, p uncorrected < 0.01, and p uncorrected < 0.001).
Pattern reinstatement analysis
The VVC voxels identified by the searchlight analysis and the other anatomically defined masks (including hippocampus, AG, SMG, and precuneus) were used as masks in the cross-task classification of memory contents. Each trial's t -map estimated from the final test run was transformed from native space to standard space. ROI-based activity patterns from both the pattern localization and final memory test phases were extracted using the ROI masks. We performed cross-task three-fold cross-validation to reveal the shared neural representation of the perception and retrieval of the same visual stimulus. Activity patterns estimated from the pattern localization data of 2/3 of the participants (i.e., the training sample) were used to train the SVC predictive model. We then used the activity patterns during the final memory test evoked by the corresponding location (memory cue) in the remaining 1/3 of the participants (i.e., the testing sample), together with the trained SVC model, to predict the memory content on a trial-by-trial basis. Critically, the SVC model was trained solely on the localizer data (day 1) and applied to the final memory test (day 2) without further model fitting. Moreover, during the final memory test, visual input was highly similar across trials because we merely highlighted each location on an identical map as the memory cue. Therefore, if a given classifier can significantly predict memory content, the classification is unlikely to be based on neural responses to the memory cue alone. For each ROI, we first calculated the average decoding accuracy for each participant across all trials. A common way to evaluate the significance of classification accuracies is to compare them with the theoretical chance level (i.e., 1/number of categories). However, previous work has shown that this approach may overestimate the significance of the classification ( Combrisson and Jerbi, 2015 ; Jamalabadi et al., 2016 ; Kowalczyk and Chapelle, 2005 ).
We used an alternative method to control for this potential bias. For each decoding analysis, we generated an empirical null distribution of accuracies by repeating our decoding analyses with classifiers trained on randomly shuffled labels ( N = 1000). Only accuracies larger than the 95th percentile of this null distribution were considered significant. Values larger than the maximum accuracy within this null distribution were assigned a p -value of < 0.001.
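The label-shuffling procedure might be sketched as below. To keep the toy example fast it uses the four category labels (chance = 25%) and a single train/test split rather than the full item-level three-fold pipeline; both are simplifying assumptions, and all data are synthetic:

```python
# Sketch of the empirical null distribution (N = 1000 label shuffles) used
# to evaluate decoding significance; data and shapes are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

X_train = rng.standard_normal((144, 50))
y_train = rng.integers(0, 4, 144)   # 4 category labels, chance = 25%
X_test = rng.standard_normal((48, 50))
y_test = rng.integers(0, 4, 48)

def accuracy(y_tr):
    clf = SVC(C=1.0, kernel="rbf", degree=3).fit(X_train, y_tr)
    return (clf.predict(X_test) == y_test).mean()

observed = accuracy(y_train)

# Empirical null: retrain 1000 times on randomly shuffled training labels.
null = np.array([accuracy(rng.permutation(y_train)) for _ in range(1000)])

threshold = np.percentile(null, 95)   # 95th-percentile significance cut-off
significant = observed > threshold
# Observed accuracies exceeding the null maximum are assigned p < 0.001.
p_below_001 = observed > null.max()
```

With random data the observed accuracy should fall inside the null distribution; with real picture-specific patterns it is expected to exceed the threshold.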
ROI-based trial-by-trial pattern similarity analysis on the modulation and final memory test data
Representational similarity analysis (RSA) ( Cohen et al., 2017 ) was used to calculate trial-by-trial pattern similarity within particular types of test trials (e.g., recall of associations belonging to the RETRIEVAL ASSOCIATIONS condition). Given the within-participant nature of the analysis, and to improve the pattern similarity estimation, we based all calculations on activity patterns in native space.
Firstly, we analyzed the multivariate activation patterns of the final test. The identified VVC voxels ( Fig. 2 A ) were transformed from standard space to native space and then used as a mask to extract single-trial activity patterns, which were vectorized and z -scored for the later correlational analysis. Activation patterns of the hippocampus ( Fig. 2 B ), angular gyrus ( Fig. 2 C ), supramarginal gyrus ( Fig. 2 D ), and precuneus ( Fig. 2 E ) were extracted in the same way. For each participant, after excluding all trials with incorrect memory-based category judgments, we divided the remaining trials into three conditions based on their modulation history (e.g., retrieval practice or retrieval suppression). Next, for activity patterns of trials within the same condition, we calculated neural pattern similarity using Pearson correlations between all possible pairs of trials within the condition ( Fig. 2 F ). These calculations yielded three separate correlation matrices per participant, one for each type of test trial. Finally, we used the mean of all r -values in the lower triangle of one participant-specific correlation matrix to represent the neural pattern similarity of that condition (the higher the r -value, the higher the pattern similarity). After repeating these steps for each participant separately, three kinds of pattern similarity values were generated for the statistical test. All mean r -values were Fisher r -to- z transformed before the following statistical analyses. To investigate whether different modulations have different effects on memory representation during the final test, we performed two planned within-participant comparisons: (1) RETRIEVAL ASSOCIATIONS vs. CONTROL ASSOCIATIONS; (2) SUPPRESSION ASSOCIATIONS vs. CONTROL ASSOCIATIONS.
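The within-condition similarity measure above can be sketched in a few lines; the patterns here are synthetic, and the trial and voxel counts are assumptions for illustration:

```python
# Minimal sketch of trial-by-trial pattern similarity for one condition:
# z-score each trial's pattern, correlate all trial pairs, average the
# Fisher-z-transformed lower triangle. All data are synthetic.
import numpy as np

rng = np.random.default_rng(3)

n_trials, n_voxels = 16, 120
patterns = rng.standard_normal((n_trials, n_voxels))

# z-score each trial's pattern across voxels before correlating.
z = (patterns - patterns.mean(axis=1, keepdims=True)) / patterns.std(
    axis=1, keepdims=True
)

# Pearson correlations between all possible pairs of trials.
corr = np.corrcoef(z)
lower = corr[np.tril_indices(n_trials, k=-1)]   # unique trial pairs only

# Fisher r-to-z transform before averaging / statistics.
condition_similarity = np.arctanh(lower).mean()
```

In the full analysis this value would be computed per participant and per condition, then compared across the planned within-participant contrasts.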
Next, we used the same approach to analyze the modulation data. For each presented location, activity patterns were extracted using the same mask from the five modulation runs. Similarly, within-condition (retrieval or suppression) trial-by-trial pattern similarity was calculated for each condition and each run. The dynamic change was modeled as the condition-by-run interaction in an ANOVA.
Statistical analysis
When comparing continuous variables (e.g., reaction time) between experimental conditions, we used repeated-measures analysis of variance (ANOVA) or paired t -tests. A significant main effect in an ANOVA was followed by post hoc tests, in which multiple comparisons were corrected by the Holm-Bonferroni method. Notably, classification accuracies were not normally distributed. Therefore, we used non-parametric methods (i.e., the Friedman test) to compare accuracies between experimental conditions. To evaluate the significance of classification accuracies, instead of comparing with theoretical chance levels, we compared real accuracies with an empirical null distribution of accuracies ( see Pattern reinstatement analysis above ). Accuracies were considered significant when they were at least higher than the 95th percentile of the corresponding null distribution. For ordinal responses (e.g., "never, " "sometimes "), the percentage of each option was calculated, and then percentages were compared across repetitions. To account for the number of comparisons that come with multiple ROIs ( n = 9), we applied False Discovery Rate correction based on the Benjamini-Hochberg procedure ( Thissen et al., 2002 ). For all statistical tests that involved multiple ROIs, FDR-corrected p values ( p FDR ) are reported along with raw p values ( p raw ) and effect sizes (e.g., Cohen's d , partial η²).
Data and code availability
Custom scripts used in this study, intermediate data (i.e., preprocessed single-trial activation patterns used for reinstatement analyses), and raw data were uploaded to the Donders Repository ( https://data.donders.ru.nl/ ). The project is named "Tracking the involuntary retrieval of unwanted memory in the human brain with functional MRI" in the Repository ( https://doi.org/10.34973/5afg-7r41 ).
Pre-scan memory performance immediately after study and 24 h later
During the immediate typing test (day1), 88.01% of the associated pictures were described correctly (SD = 10.87%; range from 52% to 100%). Twenty-four hours later, participants still recalled 82.15% of all associations in the second typing test (SD = 13.87%; range from 50% to 100%). Although we observed less accurate memory 24 h later (t(26) = 4.73, p < 0.001, Cohen's d = 0.912) ( Fig. S2 ), participants could still remember most location-picture associations well.
Fig. 2. Regions-of-interest (ROI) and rationale of the pattern similarity analysis. (A)
Functionally-defined voxels within the ventral visual cortex (VVC). We identified voxels whose activity patterns can be used to differentiate pictures that were processed during the familiarization phase and were reactivated during successful memory retrieval in the final test. (B) Anatomically-defined bilateral hippocampus ROI. (C) Anatomically-defined bilateral angular gyrus ROI. (D) Anatomically-defined bilateral supramarginal gyrus ROI. (E) Anatomically-defined bilateral precuneus ROI. (F) During the final test, "mental images" were retrieved based on highly similar memory cues (different locations within maps were cued). We derived activation patterns for each memory retrieval trial based on fMRI data, and then quantified the cross-item pattern similarity using Pearson's r . (G) Considering the highly similar perceptual processing, vivid "mental images" during memory retrieval should be reflected in lower activity pattern similarity.
Fig. 3. Behavioral performance during modulation and final memory test phase. (A)
Percentage of the trial-by-trial introspective reports during the retrieval trials. For most of the retrieval trials, associated pictures were successfully recalled (1 − P never : mean = 84.05%, SD = 11.79%). (B) With repeated retrieval attempts, associated pictures were more likely to "always" stay in mind ( P always : F(9, 234) = 5.3, p < 0.001, η² = 0.02). (C) Percentage of the trial-by-trial introspective reports during the suppression trials. During half of the suppression trials, participants successfully suppressed the tendency to recall the associated pictures ( P never : mean = 50.62%, SD = 25.35%).
For the analyses of suppression trials, we excluded all location-picture associations that the participant could not describe correctly immediately before the modulation phase (i.e., Typing Test Day 2). This approach controlled for individual differences in memory that could interfere with the analysis of memory suppression. On suppression trials, participants reported that they successfully suppressed the tendency to recall the associated pictures in about half of the trials ( P never : mean = 50.62%, SD = 25.35%, range from 4% to 92.5%; Fig. 3 C ). As shown before in the think/no-think literature ( Levy and Anderson, 2012 ), the percentages of the four types of trial-by-trial intrusion reports changed differently from the first to the tenth repetition (Choice × Repetition: F(27, 702) = 3.4, p < 0.001, η² = 0.01; Fig. 3 D ). Specifically, the percentage of "never" reports increased (F(9, 234) = 5.4, p < 0.001, η² = 0.04), while the percentage of "sometimes" reports decreased over repetitions (F(9, 234) = 2.5, p = 0.008, η² = 0.02). Together, these results suggest that participants were successful at retrieving or suppressing memory traces according to task instructions.
Memory performance during the final memory test
During the final test, participants selected, on average, the correct category (chance level = 1/4) for the associated picture on 91.82% (SD = 6.05%; range from 70.83% to 100%) of the associations successfully recalled in the typing test on day 2 (mean = 39.43). We then examined how repeated retrieval and suppression affected memory performance. First, we compared recall accuracies between the three kinds of associations (i.e., RETRIEVAL ASSOCIATIONS, SUPPRESSION ASSOCIATIONS, and CONTROL ASSOCIATIONS ). Analysis of objective recall accuracy after modulation showed no significant main effect of modulation (F(2, 26) = 0.524, p = 0.595, η² = 0.013; Fig. 3 E ). Due to the lack of a suppression-induced forgetting effect (lower accuracy for SUPPRESSION ASSOCIATIONS compared to CONTROL ASSOCIATIONS ) at the group level, we performed a correlational analysis to associate performance during memory suppression and the final memory test. We found that participants who were more effective in suppressing intrusions (higher intrusion slope score ) during the modulation phase were the ones who showed larger suppression-induced forgetting effects ( r = 0.411, p = 0.03; Fig. 3 F ), suggesting that successful retrieval suppression was subsequently associated with suppression-induced forgetting. This correlation has also been reported before in the think/no-think literature ( Levy and Anderson, 2012 ). Additionally, we investigated the effect of modulation on memory confidence and found a significant main effect (F(2, 26) = 5.928, p = 0.005, η² = 0.07; Fig. 3 G). Post hoc analyses revealed higher recall confidence for RETRIEVAL ASSOCIATIONS compared to CONTROL ASSOCIATIONS (t(26) = 3.35, p holm = 0.007, Cohen's d = 0.64) and a trend towards higher confidence compared to SUPPRESSION ASSOCIATIONS that just failed to reach our threshold for statistical significance (t(26) = 2.172, p holm = 0.07, Cohen's d = 0.41).
Finally, we asked whether modulation affected retrieval speed, indexed by RT during the final test. Even though we did not find a significant main effect of modulation (F(2, 26) = 2.905, p = 0.06, η² = 0.03; Fig. 3 H), recall of RETRIEVAL ASSOCIATIONS was faster than recall of CONTROL ASSOCIATIONS (t(26) = − 2.486, p = 0.02, Cohen's d = − 0.47).
Measuring the pattern reinstatement of individual memory during retrieval
The Support Vector Classification (SVC)-based searchlight analysis revealed brain regions including the lateral occipital cortex, fusiform gyrus, lingual gyrus, and calcarine cortex, which showed picture-specific activation patterns during perception (uncorrected p voxel < 0.001, Fig. 4 A ). We restricted our following activation pattern analyses to these voxels within the anatomical VVC boundary ( Fig. 4 B ).

Fig. 4. Identifying picture-sensitive voxels and measuring pattern reinstatement in the ventral visual cortex. (A) Using the searchlight method, we localized picture-sensitive voxels in brain regions including the lateral occipital cortex, fusiform gyrus, lingual gyrus, calcarine cortex, postcentral and precentral gyrus, supplementary motor area, and small clusters within the medial and inferior prefrontal cortex. These voxels showed picture-specific activation patterns during perception (uncorrected p voxel < 0.001). (B) We restricted our following pattern analyses to voxels within the ventral visual cortex (VVC) boundary by overlapping the searchlight accuracy map and the anatomically defined VVC. (C) fMRI activation patterns of these voxels during pattern localization were extracted to train a classifier. The activity patterns of these voxels during the final test were then extracted and used as inputs to the classifier for different pictures. (D) The classifier was first validated in a cross-participant, within-task procedure. We demonstrated that picture-sensitive voxels enable cross-participant picture classification during perception (mean accuracy = 61.88%, SD = 17.71%, p < 0.001). (E) The same classifier, without further model training, was used to decode memory contents based on activity patterns during retrieval. Results showed that the classifier could decode memory contents with accuracy higher than shuffled decoding models (mean accuracy = 43.13%, SD = 16.52%, p < 0.001). (F) We observed significantly lower classification accuracies for cross-task classification compared to within-task classification (t(26) = − 3.97, p < 0.001). The red line represents the 95th percentile of accuracies within the 1000 label-shuffled null distribution.
Next, we confirmed that activation patterns of these voxels could be used for cross-participant classification of the visual stimulus during perception. We trained the SVC based on activation patterns of two-thirds of all participants and tested the model using the remaining one-third. Results from the three-fold cross-validation confirmed that these VVC voxels enable cross-participant picture classification (mean accuracy = 61.88%, SD = 17.71%, shuffled accuracy max = 3.2%, p < 0.001, Fig. 4 D ).
The preceding results established that activity patterns of voxels within the VVC carry picture-specific information during perception. We next examined whether we could detect the pattern reinstatement of memory traces within the same area during the final memory test. We trained the SVC model based on the neuroimaging data from the pattern localization phase to classify trial-by-trial memory content in the final test ( Fig. 4 C ). Results showed that the classifiers could decode memory content based on activity patterns during the final test with above-chance accuracy (mean accuracy = 43.13%, SD = 16.52%, shuffled accuracy max = 3.3%, p < 0.001, Fig. 4 E ), although the accuracy was significantly lower than that of the within-task classification of the perceived visual stimulus (t(26) = − 3.97, p < 0.001, Cohen's d = − 0.76, Fig. 4 F ).
We ran two control analyses to test the robustness of the observed pattern reinstatement in the VVC during retrieval. We first examined the effect of the arbitrary threshold used in cluster formation on the subsequent classification of memory contents. Specifically, we used two additional thresholds (uncorrected p voxel = 0.01 and 0.05) to identify picture-sensitive voxels during the whole-brain searchlight analysis and confirmed that the classifications could also be performed based on picture-sensitive voxels under these other thresholds ( Fig. S3 ). In addition, beyond picture-specific classification, we investigated the possibility of category-specific classification based on brain activity patterns. All of the pictures to be associated belong to one of four groups: animal, human, object, or location. Similarly, we localized category-sensitive voxels within the VVC ( Fig. S4D ) and confirmed that these voxels also carry category-specific information during perception (mean accuracy = 69.13%, SD = 9.67%, shuffled accuracy max = 29.6%, p < 0.001, Fig. S4E ). Also, activity patterns of these category-sensitive voxels during memory retrieval enabled cross-participant, cross-task classification of the category during the final memory test (mean accuracy = 44.29%, SD = 8.9%, shuffled accuracy max = 30.4%, p < 0.001, Fig. S4E ).
Based on the same decoding pipeline, we performed a control pattern reinstatement analysis on activation patterns within the premotor cortex ( Fig. S6A ), which, according to the reinstatement model, is not expected to represent memory content during retrieval (for details see Supplemental Texts, Section 4). Even for category-based decoding, which requires less information than item-based decoding, activation patterns of this area during retrieval could not be used to classify memory contents (Fig. S6B).
Without considering the modulation of each association (i.e., retrieval, suppression, or control), we demonstrated pattern reinstatement of individual memories during retrieval after a 24 h delay. Based on the differences in RT and confidence, we tested whether different modulations have different effects on the evidence (i.e., decoding accuracy or decision value ( Linde-Domingo et al., 2019 )) of memory reactivation (e.g., repeated retrieval might increase the reactivation evidence, while suppression might decrease it). We performed these analyses based on classifier training in both a cross-participant and a within-participant manner. These analyses yielded no significant differences between modulations in any of the ROIs investigated ( details in Supplemental Materials; Tables S1-S4 ).
In sum, we identified picture-specific voxels within the VVC and demonstrated the pattern reinstatement of individual memory traces in these voxels during retrieval. The same pattern reinstatement was detected in the anatomically defined hippocampus, AG, SMG, and precuneus. These results are the foundation of our following multivariate pattern analyses: pattern reinstatement 24 h after initial learning suggests that activity patterns of these regions during retrieval carry mnemonic representations.
Next, we confirmed that the observed activity reduction reflected a linear decrease in activity with repeated retrieval, using the data from the modulation phase. Specifically, we extracted the beta coefficients from these clusters for each run of the modulation phase and tested for changes in activity amplitude across runs. We found reduced VVC activity over repeated retrieval attempts (F(4, 25) = 5.95, p < 0.001, η² = 0.174). Similarly, for the bilateral hippocampus, we observed a trend toward a gradual decrease of hippocampal signal across repetitions (left hippocampus: F(4, 25) = 2.39, p = 0.056, η² = 0.087; right hippocampus: F(4, 25) = 2.22, p = 0.072, η² = 0.082). Even though we found retrieval-related activity reduction in the right AG and precuneus during the final test, we did not find a corresponding gradual decrease during modulation (right AG: F(4, 25) = 0.734, p = 0.571, η² = 0.02; right precuneus: F(4, 25) = 1.88, p = 0.12, η² = 0.05).
Repeated retrieval dynamically enhances the distinctiveness of activity patterns in the visual cortex, but not the hippocampus: focusing on the identified VVC voxels, parietal lobe, and hippocampus, we calculated the trial-by-trial activity pattern similarity for RETRIEVAL ASSOCIATIONS and CONTROL ASSOCIATIONS separately. Results showed that retrieval-related activity patterns for RETRIEVAL ASSOCIATIONS had decreased similarity in the VVC compared to CONTROL ASSOCIATIONS (t(26) = − 2.3, p raw = 0.029, p FDR = 0.08, Cohen's d = − 0.44; Fig. 4 C ). To test the robustness of the decreased pattern similarity for RETRIEVAL ASSOCIATIONS in the VVC, we performed the same contrast based on (1) all associations instead of only remembered associations, and on VVC areas defined by (2) different thresholds and (3) category-sensitive voxels instead of picture-sensitive voxels. All control analyses yielded the same result as the reported main analysis ( Figs. S8-S10 ). A similar numerical trend in the hippocampus ( Fig. 6 H ) failed to reach significance.
Our ROI analyses already revealed reduced activity amplitude but more distinct activity patterns in the VVC, right AG, and precuneus. We then performed a correlational analysis to explore the relationship between changes in activity amplitude and changes in pattern similarity across participants. We found that participants who showed a larger reduction in VVC activity amplitude were more likely to show a larger decrease in VVC pattern similarity ( r = 0.610, p < 0.001; Fig. 5 C ). This correlation was also significant for the right precuneus ( r = 0.427, p = 0.026), but not for the right AG ( r = − 0.051, p = 0.799).
To characterize the dynamic modulation of pattern similarity in the VVC, we further applied the same variability analysis to each run of the modulation phase and analyzed these pattern similarity values using a 2 × 5 ANOVA ( modulation; repetition ). We saw a significant main effect of run , reflecting that pattern similarity of the VVC decreased with repetitions (F(4, 100) = 10.55, p < 0.001, η² = 0.028). We also saw a main effect of modulation , reflecting that pattern similarity of the RETRIEVAL ASSOCIATIONS was consistently lower than the similarity of SUPPRESSION ASSOCIATIONS (F(1, 25) = 23.77, p < 0.001, η² = 0.028). The interaction between modulation and runs just failed to reach significance (F(4, 100) = 2.427, p = 0.053, η² = 0.001; Fig. 5 D ). This pattern of results suggests that decreased pattern similarity is not only the result of repetition: even though memory cues of SUPPRESSION ASSOCIATIONS had also been presented ten times during the modulation, repeated retrieval enhanced pattern distinctiveness more effectively than suppression did. We applied the same dynamic modulation analysis to the ROIs that demonstrated lower cross-item pattern similarity for RETRIEVAL ASSOCIATIONS (i.e., right AG, left SMG, and bilateral precuneus) during the final memory test phase, but we found no evidence for an interaction between modulation and runs (right AG:
Retrieval suppression was associated with reduced lateral prefrontal activity
Weaker lateral prefrontal cortex (LPFC) activation as the result of retrieval suppression: the contrast between retrieval of SUPPRESSION ASSOCIATIONS and CONTROL ASSOCIATIONS during the final test revealed decreased activation in one cluster in the left LPFC ( x = − 52, y = 38, z = 16, Z peak = 4.09, size = 1320 mm 3 ; Fig. 7 A ). We did not find any significant effect of retrieval suppression on hippocampal activity amplitude in the whole-brain or ROI analyses. To characterize dynamic activity changes in the left LPFC, we extracted beta values from the cluster for each modulation run and did not find decreased activity from the first to the fifth run during suppression (F(4, 25) = 2.03, p = 0.09, η² = 0.056; Fig. 7 B ). Subsequently, we performed an exploratory analysis restricted to the first four runs and found gradually decreasing activity in the left LPFC (F(3, 25) = 2.98, p = 0.036, η² = 0.078).
Intact neural representations after memory suppression: next, we examined whether retrieval suppression modulated activity patterns in the VVC, hippocampus, or parietal lobe. Pattern similarity analysis revealed no significant difference between SUPPRESSION ASSOCIATIONS and CONTROL ASSOCIATIONS. Given the lack of a group-level effect of memory suppression on final memory performance, but the strong correlation between the intrusion slope and suppression-induced forgetting, we further investigated suppression-induced changes in pattern similarity among participants who showed strong negative intrusion slopes and (by correlation) more suppression-induced forgetting. More specifically, we used the median split method to divide the data of all participants into two groups (strong suppression group vs. weak suppression group) according to their intrusion slope values and compared changes in pattern similarity between groups. Our results suggested that the two groups did not demonstrate differential suppression-induced changes in pattern similarity for any of the ROIs investigated ( Table S6 ).
Discussion
Active memory retrieval is known to be a powerful memory enhancer, while memory suppression tends to prevent unwanted memories from further retrieval. Previous neuroimaging investigations of the neural effects of repeated retrieval and suppression revealed corresponding neural changes in both univariate activity and multivariate activity pattern analyses. Building on these findings, we tested whether similar neural changes can be detected when modulation is delayed by 24 h (i.e., after newly acquired memories have undergone initial consolidation). Also, because we collected fMRI data from both the modulation phase and the final memory test, this design allowed us to analyze whether the neural changes seen in the final memory test are accompanied by gradual changes during the modulation phase. Similar to previous literature ( Ferreira et al., 2019 ), our results demonstrated that repeated retrieval of consolidated memories was associated with enhanced episode-unique mnemonic representations in the parietal lobe. Critically, our dynamic analysis provided converging evidence for the adaptation of stronger mnemonic representations in visual processing areas, which were involved in the initial perception. Our results suggest that repeated retrieval of newly acquired memories and of initially consolidated memories may be associated with similar neural changes.
Repeated retrieval strengthened consolidated memories. Behaviorally, our results demonstrate that, after an initial delay of 24 h, repeated retrieval strengthened memories further, indexed by higher recall confidence and shorter reaction times. The beneficial effect of retrieval practice on the subsequent retrieval is well established ( Karpicke and Blunt, 2011 ;Karpicke and Roediger, 2008 ;Karpicke and Roediger III, 2007 ;Smith et al., 2016 ). In our study, memory accuracy was already near the ceiling level, and thus we did not find higher recall accuracy of RETRIEVAL ASSOCIATIONS compared to CONTROL ASSOCIATIONS . Corroborating the behavioral effect during the final memory test, we also found that repeated retrieval of certain memories increased their tendency to remain stable in mind during the modulation phase.
Repeated retrieval is associated with subsequently decreasing activity amplitude. Our whole-brain univariate analysis revealed a set of brain regions, including frontal, parietal (mainly precuneus), and ventral visual areas, that showed decreasing activity amplitude with repeated retrieval. Activity changes in frontal and parietal areas have been reported frequently in the literature on retrieval-mediated learning/forgetting, but the directions of the reported changes are mixed. Some reports have found similar univariate decreases in frontal or parietal areas ( Kuhl et al., 2010; Wimber et al., 2008, 2011 ), but others reported activity increases in these areas ( Himmer et al., 2019; Nelson et al., 2013; van den Broek et al., 2016; Wirebring et al., 2015 ). In addition to the whole-brain analysis, our ROI analysis further showed decreased activity in the right angular gyrus. In sum, our study mainly found decreased activity in frontal and parietal areas after repeated retrieval of initially consolidated memories. Moreover, decreased activity in ventral visual areas is a novel finding. Previous studies usually used words as materials to be remembered ( Nelson et al., 2013; Wimber et al., 2008, 2011; Wirebring et al., 2015 ), while we used pictures. One other study also used pictures and the TNT paradigm but did not reveal reliable activity changes for retrieved pictures compared to control pictures ( Gagnepain et al., 2014 ). To test the fast-consolidation hypothesis of retrieval-mediated learning ( Antony et al., 2017 ), we further examined changes in hippocampal activity during modulation and the final test. Similar to a recent report of slow hippocampal disengagement during repeated retrieval ( Ferreira et al., 2019 ), we found dynamically decreasing hippocampal activity across repeated retrieval of initially consolidated memories.
Our results, together with the findings of Ferreira and colleagues, are consistent with decreasing retrieval-related hippocampal activity over the course of consolidation ( Takashima et al., 2006, 2009 ). Repeated retrieval enhanced episode-unique cortical representations. Our multivariate pattern analysis showed that, compared to controls, repeated retrieval led to less similar activity patterns in ventral visual areas and almost all parietal ROIs, including AG, SMG, and precuneus. Using a conceptually similar method, Ferreira and colleagues also reported increased item-unique activity patterns in parietal regions across two days ( Ferreira et al., 2019 ). Ye and colleagues further showed how retrieval practice led to memory updating by differentiating activity patterns in the mPFC ( Ye et al., 2020 ). Together, these results may suggest an interaction between the effect of repeated retrieval and episode-unique neural representations during the fast formation of cortical memories. A similar representational dissimilarity analysis has been used to analyze patterns of activity during retrieval suppression ( Gagnepain et al., 2014 ). However, after the modulation, participants in that study only performed a visual perception task, which measures repetition priming rather than providing a direct measure of memory. Therefore, it is impossible to directly compare the trial-by-trial pattern similarity during retrieval between RETRIEVAL and CONTROL associations.
One novel aspect of our findings is that, after repeated retrieval, the decreased retrieval-related activity amplitude correlated with enhanced distinctiveness of activity patterns in ventral visual areas and precuneus. Our dynamic analysis of these two neural measures during modulation and the subsequent memory test further confirmed that the neural changes observed during the later test are associated with dynamic adaptation of activity amplitude and pattern similarity during modulation in the ventral visual areas. However, this is not true for the precuneus. In general, this pattern of results is in line with our knowledge about how preexisting associative memory shapes brain responses. Prior information about upcoming stimuli is often associated with overall lower activity amplitude, a phenomenon termed "expectation suppression" ( Summerfield et al., 2008; Summerfield and de Lange, 2014 ). At the same time, the underlying activity patterns carry more visual information ( de Lange et al., 2018; Kok et al., 2012 ). By correlating these two neural changes in the same regions, our study reports a similar phenomenon during episodic memory retrieval. This finding suggests that the inverse relationship between overall activity amplitude and pattern-based information representation holds not only for low-level perceptual memory but also for episodic memory retrieval. Moreover, the correlation between activity amplitude and pattern similarity may also be understood from a "noise correlations" perspective in information processing ( Averbeck et al., 2006; Cohen and Kohn, 2011 ). A recent simultaneous EEG-fMRI study found that decreased alpha/beta power, as a potential marker of reduced noise correlations, was associated with increased stimulus-specific activation patterns measured by representational similarity analysis ( Griffiths et al., 2019 ).
We speculate that retrieval practice might not directly enhance memory representations, but affect them by reducing their noise correlations. During retrieval of strengthened memories, redundant ongoing neuronal activity (i.e., noise) may be suppressed. Therefore, we observed lower overall activity amplitude and, at the same time, reduced "noise correlations," boosting the signal-to-noise ratio. Thus, stimulus-specific neural patterns are reinstated with more specificity, demonstrating lower pattern similarity across distinct trials.
Retrieval suppression inhibited lateral prefrontal activity during subsequent retrieval. For SUPPRESSION associations, we observed lower LPFC activity amplitude, but relatively intact activity patterns in visual areas, parietal lobe, and hippocampus during subsequent retrieval. Active memory suppression during retrieval is proposed to be partially supported by inhibitory control mechanisms mediated by the lateral prefrontal cortex (Anderson and Hanslmayr, 2014; Guo et al., 2018). During retrieval suppression, LPFC is typically activated (Anderson, 2004; Guo et al., 2018; Levy and Anderson, 2012), but it showed gradually decreasing activity amplitudes from early suppression attempts to later trials of suppression (Depue et al., 2007). Consistent with this pattern, we found a similar decrease in LPFC activity amplitude across suppression attempts during the modulation phase, and lower activity amplitude during the subsequent retrieval. Together with the trial-by-trial intrusion frequency rating during modulation, this activity decrease across suppression attempts may suggest lower inhibitory control demands when suppressing increasingly weakened memories. The observed reduction in LPFC activity during the subsequent retrieval might be a long-lasting effect of this reduced activity amplitude, suggesting that modulated cognitive control allocation hampers retrieval. Another interesting observation is that we found only weak evidence for suppression-induced changes in pattern reinstatement during the final memory test. Even though the involvement of the LPFC-hippocampal circuit in suppression has been examined (Anderson and Hanslmayr, 2014; Guo et al., 2018), the changes in neural representations of individual memory traces underlying suppression-induced forgetting remain less well studied.
One study measured the effect of retrieval suppression on newly acquired visual memories via cortical inhibition (Gagnepain et al., 2014): retrieval suppression reduced activity amplitude in the fusiform gyrus compared to retrieval, but the pattern was opposite to the one found in the lateral occipital complex. Effective connectivity and pattern similarity analyses suggested that top-down control mediated by the middle frontal gyrus suppressed perceptual memory traces in the visual cortex. Our study did find comparable suppression-induced changes in activity amplitude, but not in mnemonic representations, in the visual cortex. This may relate to the modest behavioral effects or to less labile consolidated memory traces. Future studies with stronger suppression-induced forgetting effects could directly compare activity patterns between still-remembered and forgotten associations.
Limitations. Our study has a few limiting aspects that should be mentioned. Firstly, given that we only found a modest effect of suppression-induced forgetting, it is difficult to interpret the repeated suppression-related fMRI results. There are at least two possible reasons for this modest effect: first, due to extensive training during encoding and/or the nature of our picture-location tasks, recall accuracy for all conditions was close to ceiling. Second, the suppression-induced forgetting effect is much smaller when memories have been consolidated (Liu et al., 2016). Thus, in line with previous studies, suppression-induced forgetting may not have emerged at the group level (Gagnepain et al., 2017; Liu et al., 2016). Nevertheless, we replicated two findings, confirming that our memory suppression modulation was still effective. First, when unwanted memories were suppressed repeatedly, their tendency to intrude was reduced during the TNT phase (Benoit et al., 2015; Gagnepain et al., 2017; Hellerstedt et al., 2016; Levy and Anderson, 2012; van Schie and Anderson, 2017). Second, the extent of this reduction (i.e., intrusion slope) correlated with the subsequent suppression-induced forgetting effect across participants (Levy and Anderson, 2012). Given this correlation, we further compared suppression-induced neural changes between a strong and a weak suppression group, but still did not find an effect of suppression on mnemonic representations. These results may suggest that even for participants who showed suppression-induced forgetting, the underlying mnemonic representations remained intact. A second potential limitation of our study is that we only found the effect of repeated retrieval on trial-by-trial pattern similarity instead of a more direct measure of memory reactivation, such as decoding accuracy or decision value (Linde-Domingo et al., 2019).
Therefore, the relationship between the reduction in univariate activity and enhanced multivariate representation can be interpreted from two different perspectives. On the one hand, it can be explained as enhanced unique cortical memory representations. On the other hand, the reduction in across-item pattern similarity could be due to other factors, for example, reduced memory-unrelated "noise correlations". Notably, our pattern reinstatement analysis demonstrated that, based on activity patterns in our ROIs, the individual picture can be decoded when the classifier was trained on the localizer data (day 1) before testing it on the final memory test (day 2). This reinstatement laid the groundwork for our pattern similarity calculation, because it provides evidence that the activity patterns used in the variability analysis carry item-specific mnemonic information during retrieval. However, when we divided all associations into three groups (i.e., retrieval, suppression, and control), we did not find evidence that retrieval or suppression separately modulated decoding accuracies or d values; instead, all three kinds of associations showed comparable decodability during retrieval. This result ruled out the possibility that differences in decodability could fully explain the differences in our pattern similarity measure. These results may suggest that the decoding accuracies or d values used here were not sensitive enough after initial consolidation, because perceptual information might already be based on a transformed representation (Xiao et al., 2017). In addition, decoding outcomes and pattern similarity may be associated with different aspects of mnemonic representations. Sensitive decoding depends on the reinstatement of the original representation related to the perceptual input, while pattern similarity reflects episode-unique activity patterns across retrieved "mental images".
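The across-item pattern similarity measure can be illustrated with a toy sketch (hypothetical voxel patterns, not our data; Pearson correlation averaged over all trial pairs, where lower similarity indicates more episode-unique representations):

```python
from itertools import combinations
import math

def pearson(x, y):
    # Pearson correlation between two voxel activity patterns
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def across_item_similarity(patterns):
    # Mean pairwise correlation across distinct trials:
    # lower values indicate more episode-unique representations
    pairs = list(combinations(patterns, 2))
    return sum(pearson(x, y) for x, y in pairs) / len(pairs)

# Toy patterns: three trials x four voxels
distinct = [[1.0, 0.0, 0.2, 0.9], [0.1, 1.0, 0.8, 0.0], [0.5, 0.2, 1.0, 0.1]]
similar = [[1.0, 0.9, 0.8, 0.7], [0.95, 0.85, 0.8, 0.75], [1.0, 0.85, 0.9, 0.7]]
more_unique = across_item_similarity(distinct) < across_item_similarity(similar)
```

In the actual analysis, each pattern would be the vector of voxel activities from one retrieval trial within an ROI.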
Enhanced episode-unique representations after repeated retrieval, particularly in the visual processing areas, support the following notion: given that our memory cues (i.e., highlighted locations) are visually very similar, the changes in pattern similarity in visual areas are more likely to be the result of enhanced mnemonic reinstatement than of variability induced by the visual features of the memory cues. Thirdly, when using a conservative correction for the number of ROIs tested, contrasts in parietal areas showed only trends toward significance, although the individual tests were significant. We believe these trends in parietal areas could result from our ROI definitions being based on a coarse atlas at the group level. That is to say, for each participant, perhaps only part of each parietal ROI is involved in retrieval processing.
Conclusion. Taken together, our study probed the effects of repeated retrieval and suppression on initially consolidated memories. We showed that repeated retrieval dynamically reduces the activity amplitude in the visual cortex and hippocampus while enhancing the distinctiveness of activity patterns in the visual cortex and parietal lobe. Moreover, the reduction in activity amplitude correlated with the enhancement of episode-unique mnemonic representations in visual areas and precuneus. By contrast, repeated suppression, as done here, was associated with reduced lateral prefrontal activity but intact mnemonic representations. These findings extend our understanding of the neural changes underlying memory modulation from newly acquired memories to initially consolidated memories and suggest that active retrieval may strengthen episode-unique information neocortically after initial encoding and consolidation.
Declaration of Competing Interest
The authors declare no competing interests.
POINT CLOUD REFINEMENT WITH A TARGET-FREE INTRINSIC CALIBRATION OF A MOBILE MULTI-BEAM LIDAR SYSTEM
LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisitions, in cities for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information, which can be used in various applications: mapping of the environment, localization of objects, and detection of changes. Also, with recent developments, multi-beam LIDAR sensors have appeared and are able to provide a high amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform requires an extrinsic calibration, so that data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, we can separate the calibration into two distinct parts: on one hand, an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame; on the other hand, an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the constructor, but the model can be non-optimal, which introduces errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target, and only uses information from the acquired point clouds, which makes it simple and fast to use. Also, a corrected model for the Velodyne sensor is proposed.
An energy function which penalizes points far from local planar surfaces is used to optimize the different proposed parameters for the corrected model, and we are able to give a confidence value for the calibration parameters found. Optimization results on both synthetic and real data are presented.
INTRODUCTION
Light Detection and Ranging (LIDAR) sensors are useful for many tasks: mapping (Nuchter et al., 2004), localization (Narayana K. S et al., 2009) and autonomous driving (Grand Darpa Challenge, 2007) are some of the tasks where they are used. Multi-beam LIDAR sensors give data with a high density of points and are more precise than mono-beam sensors; they are also evolving fast and becoming cheaper over time. To give accurate data, multi-beam sensors need an intrinsic calibration: generally, this calibration depends on the geometric disposition of the beams in the sensor. The calibration follows a model given by the constructor, and this model can be corrected in order to give more precise data. The different representations of each acquired point, in the different reference frames, are illustrated in figure 1. The intrinsic calibration describes the transformation of the acquired data from spherical coordinates to Cartesian coordinates, referenced in the same reference frame. The optimization we propose consists in finding some additional parameters for each beam of the LIDAR sensor. A beam is set as a reference, and we optimize the intrinsic calibration parameters of the other beams with respect to this reference. The procedure is described in more detail in section 3. In this article, we call calibration of the sensor the intrinsic calibration of the multi-beam LIDAR sensor: the intrinsic calibration allows data to be correctly referenced in the Cartesian sensor reference frame. The solution we propose is to estimate, after the acquisition, the parameters of the calibration that give the "best" point cloud, depending on some criteria (figure 1 illustrates the geo-referencing of the data). We present an unsupervised calibration method for multi-beam LIDAR sensors, which does not need any calibration target.
This paper is organized as follows: in section 2, we present the state of the art concerning algorithms for the intrinsic calibration of multi-beam LIDAR systems. Section 3 presents our optimization method for the intrinsic calibration parameters. Section 4 shows some experimental results obtained with our algorithm. Finally, section 5 gives a conclusion to this paper. Figure 1 shows our mobile mapping system, with a LIDAR sensor mounted on the roof, the Velodyne HDL-32E; we give its specifications in section 3. The figure also gives the different representations of a point acquired by the mapping system.
RELATED WORK
• Raw data are acquired by the multi-beam sensor mounted on the Vehicle, which are the distance of the point acquired to the sensor and two angles.
• The raw data can be expressed in the Cartesian reference frame of the sensor: this is done using the intrinsic calibration parameters of the sensor.
• The extrinsic calibration gives the geometric transformation between the sensor and the IMU -which are mounted on the mobile platform -and is needed to have coordinates registered in the navigation reference frame. There are six parameters to retrieve, three rotations and three translations.
• The data can also be geo-referenced by applying the transformation between the navigation reference frame and the world reference frame to these data. This transformation is given by the fusion of data from many sensors embedded on the vehicle, such as an IMU and a GPS.
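The chain of transformations listed above can be illustrated with a minimal Python sketch (the spherical-to-Cartesian convention below is the standard one and is only an approximation of the constructor's exact model; the function names are illustrative):

```python
import math

def spherical_to_cartesian(rho, phi, theta):
    # Intrinsic step: raw measurements (distance rho, vertical angle phi,
    # horizontal angle theta) to Cartesian sensor coordinates
    x = rho * math.cos(phi) * math.sin(theta)
    y = rho * math.cos(phi) * math.cos(theta)
    z = rho * math.sin(phi)
    return (x, y, z)

def apply_pose(point, R, T):
    # Rigid transformation (3x3 rotation R, translation T): the same form is
    # used for the extrinsic calibration (sensor -> body) and for
    # geo-referencing (navigation -> world)
    x, y, z = point
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + T[i] for i in range(3))

# A point 10 m away with zero angles, then geo-referenced by a pure translation
p_sensor = spherical_to_cartesian(10.0, 0.0, 0.0)
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
p_world = apply_pose(p_sensor, identity, [100.0, 200.0, 0.0])
```

In the real pipeline, the rotation and translation come from the fused IMU+GPS navigation solution and change with the acquisition time of each point.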
The calibration of a LIDAR sensor is an important task, whether it has many beams or not. It allows the sensor to give correctly referenced data during the acquisition process, which is necessary for many tasks, such as point cloud segmentation (Serna and Marcotegui, 2014) for example. In this section, we discuss some of the intrinsic calibration techniques for multi-beam LIDAR acquisition systems.
Multi-beam LIDAR sensors can be separated into two categories: • Sensors made of several mono-beam LIDAR sensors, for which the data are fused and which provide 3D information with a specific calibration routine, such as the Riegl sensor (Riegl LIDAR sensor datasheet, 2015). • Sensors natively built with multiple beams in a single device, such as the Velodyne sensors.
Because mono-beam LIDAR sensors may be cheaper than multi-beam ones, some 3D mapping systems are built around several mono-beam sensors. These sensors also need to be calibrated, and some automatic algorithms exist. This is for example the case in (Sheehan et al., 2012), where the authors propose an automatic method for the self-calibration of a 3D laser. The 3D laser is made of three mono-beam SICK LMS-151 LIDARs placed on a rotating plate; for the self-calibration of the sensor, the authors measure the quality of the acquired point clouds and correct the calibration parameters accordingly.
In (Lin et al., 2013), another automatic optimization for the calibration of a self-made multi-layer LIDAR sensor is proposed. The authors mounted a single-layer HOKUYO UTM-30LX LIDAR sensor on a pan-tilt unit, and estimated the new parameters induced by the pan-tilt unit by correcting the structure of planar surfaces which were not correctly planar with bad calibration parameters.
In this section, we discuss existing work on the intrinsic calibration of Velodyne sensors, first because these sensors have been widely popular since 2007, but also because this is the kind of sensor we used for our experimentations. Although the Velodyne sensors appeared recently (around 2007), we can already find some calibration techniques specific to this kind of sensor. Indeed, (Glennie and Lichti, 2010) and (Muhammad and Lacroix, 2010) propose an optimization of the intrinsic parameters for the 64-beam version, and (Chan and Lichti, 2013) proposes an intrinsic calibration for the 32-beam model. In (Glennie and Lichti, 2010) and (Muhammad and Lacroix, 2010), the authors use a particular calibration environment, containing many planar walls, to optimize the intrinsic parameters: these walls are extracted from the acquired point cloud, and their structure is corrected in order to optimize the calibration parameters. In (Chan and Lichti, 2013), the optimization of the intrinsic parameters is done statically, using environment information such as planar walls and vertical cylinders. In (Chan and Lichti, 2015), the authors propose an extension of the method presented in (Chan and Lichti, 2013): they also correct the intrinsic calibration parameters in a kinematic mode, by correcting planar walls and cylinders extracted from the point clouds.
In (Huang et al., 2013), the authors propose a full extrinsic calibration of a system made of a Velodyne 64-beam LIDAR sensor and an infra-red camera; they also optimize some intrinsic calibration parameters of the LIDAR sensor. They use a calibration target, and with the infra-red images, they locate the impacts of the LIDAR beams on the target. In (Atanacio-jiménez et al., 2011), the authors present an automatic algorithm to optimize the intrinsic and extrinsic parameters of a Velodyne HDL-64E sensor. A corrected model for the intrinsic parameters is proposed and the parameters are optimized to fit the model. All the optimizations are done using a calibration target. Finally, another optimization of the intrinsic calibration parameters is proposed in (Levinson and Thrun, 2010). For the optimization, the authors define an energy function which penalizes points that are far away from planar surfaces extracted from the acquired data. Starting from an initial estimate, they iteratively compute values of their energy function by modifying the concerned intrinsic parameters in the neighborhood of the initialization. They use a grid search to optimize the parameters and reduce the size of the neighborhood at each iteration. The main problem is that the minimization can be long if a high precision is required. Also, because the neighborhood is a discrete space, the optimal solution may not be reached.
To optimize the calibration parameters of the multi-beam sensor, we use an energy function which only needs information extracted from the acquired point clouds. No calibration target is used, and the process is unsupervised. The defined energy function is also minimized iteratively, as explained in section 3. However, the differences with respect to existing methods are manifold: • First, the energy is defined as the sum of the squared distance of each point to the closest plane it should belong to, and its expected optimal (minimum) value is related to the global covariance of the point cloud noise.
• We also introduce weights in the energy which exploit the local planarity of the data.
• Our method leads to a more accurate calibration for the point cloud, and does not need a precise initialization.
• The numerical resolution is faster than existing methods, and is done in acceptable time.
• We give an analysis of the precision obtained for the calibration parameters with the resolution.
PROPOSED OPTIMIZATION METHOD
To do our acquisitions, we have a mobile mapping system, presented in figure 1. It is equipped with several sensors to precisely localize the vehicle. There is also the multi-beam LIDAR sensor, the 32-beam Velodyne, which is mounted on top of the vehicle, as shown in figure 1. The Velodyne sensor provides up to 700,000 points/s, and covers a vertical field of view of 40° (from -8° to 32°) and a horizontal field of view of 360°. Also, we know the vehicle's global pose at each control point, given at the frequency of the IMU+GPS fusion, which is 100 Hz. For our needs, we only use the multi-beam sensor to acquire data, and for geo-referencing the data, we use information given by the proprioceptive sensors: the global position of the vehicle is retrieved by fusing data from the IMU and the GPS. For the optimization of the intrinsic calibration parameters, we assume that the localization of the vehicle is properly provided by the navigation sensors presented before.
The points in a point cloud come from the combination of the acquisitions of each beam of the multi-beam LIDAR sensor: during the motion of the vehicle, adjacent beams on the sensor will acquire, at different times, points that belong to the same surface. Figure 2 shows the expected result: with a wrong calibration, points acquired by neighbor beams will not be co-linear, whereas with a good calibration, lines of points acquired by close beams will overlap. For the optimization of the intrinsic parameters, we suppose that the extrinsic calibration parameters are already optimized: this optimization is done with the algorithm presented in (Nouira et al., 2015), where an optimization method for the extrinsic parameters is detailed.
Definition of the energy function
To optimize the calibration parameters, we want to consider points which belong to planar surfaces, and exploit the previous observation that these surfaces are not exactly planar with a wrong calibration. We start with an initial calibration, and only use information extracted from the point clouds. We do not use any prior information on the point cloud, and rely on information obtained during the optimization: no particular data is needed, and we assume that, given the density of the LIDAR sensor, points belong to locally planar surfaces. Eq. (1) gives the energy function we defined to optimize the calibration parameters, a weighted sum of squared point-to-plane distances normalized by the total weight Wt:

J = (1 / Wt) * Σi∈B Σj Σk w i,j,k * (n i,k · (p i,k - m j,k))²    (1)

with Wt = Σ w i,j,k, and where j ranges over the 2N beams neighboring beam i. In equation (1), the other terms are: • B is a sample of the Velodyne sensor beams, with B ⊂ [0; 31] • N is half the number of neighbor beams to beam i taken into account • k iterates on a subset of the points of beam i • w i,j,k is a weight whose value is 1 or 0, depending on a threshold on the distance between points p i,k and m j,k.
• n i,k is the normal of the local tangent plane at point p i,k.
• p i,k and m j,k are respectively the k th point of beam i, projected in the global reference frame and its nearest neighbor on beam j, also projected in the same reference frame.
• p' i,k and m' j,k are respectively the k-th point of beam i, expressed in the sensor coordinate frame, and its nearest neighbor on beam j, also expressed in the same coordinate system.
• Rnav and Tnav are respectively the rotation matrix and translation vector from the navigation reference frame to the global reference frame. This matrix and vector depend on the time of the acquisition; thus, they change from one point to another.
The energy we defined has a physical interpretation: its unit is a squared distance (m²), since it is a sum of squared distances between points. We suppose that the point cloud contains some noise coming from various sources (motion of the vehicle, errors from the navigation system, errors from the LIDAR sensor), and that for each point taken into account in the calculation of the energy J, this noise is independent, centered, and normally distributed: with these hypotheses, the energy J follows a chi-squared distribution. The energy J gives an estimate of the variance σ² of the point cloud noise when Nt is big enough.
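A minimal Python sketch of this energy, with binary weights and the point-to-plane residual described above (the data layout and function names are illustrative, not the paper's C++ implementation):

```python
def point_to_plane_energy(pairs, d_max=0.20):
    # J: weighted mean of squared point-to-plane distances. Each pair holds a
    # point p of beam i, its nearest neighbor m on a neighboring beam j, and
    # the normal n of the local tangent plane estimated at p.
    total, weight = 0.0, 0
    for p, m, n in pairs:
        diff = [p[a] - m[a] for a in range(3)]
        dist = sum(d * d for d in diff) ** 0.5
        w = 1 if dist < d_max else 0                  # binary weight w_{i,j,k}
        res = sum(n[a] * diff[a] for a in range(3))   # signed distance to plane
        total += w * res * res
        weight += w
    return total / weight if weight else 0.0

# Two neighbor pairs on the ground plane (normal along z): residuals 0.05 and 0
pairs = [((0.0, 0.0, 0.05), (0.1, 0.0, 0.0), (0.0, 0.0, 1.0)),
         ((1.0, 0.0, 0.0), (1.1, 0.0, 0.0), (0.0, 0.0, 1.0))]
j = point_to_plane_energy(pairs)  # 0.05**2 / 2 = 0.00125 m^2
```

With a good calibration the residuals shrink and J approaches the variance of the point cloud noise, as discussed above.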
In this section, we will present the optimization of the intrinsic calibration parameters. The optimization of the energy will be detailed.
Optimization of the calibration parameters
For the intrinsic parameters optimization, we use the energy J defined in (1). Figure 3 gives an illustration of the acquisition of data. The Velodyne HDL-32E is composed of 32 beams, which are placed on the same vertical plane. In figure 3 a), there is an example with 2 fibers. Fiber 15 is called the "reference": we choose it as a reference because its vertical angle is equal to zero. For each acquisition, the sensor gives the following information: • The vertical angle φi of each beam, with respect to the reference fiber.
• The distance ρ i,k between the origin of fiber i and the acquired point k.
• The horizontal angle θi, which is introduced by the motion of the sensor.
The intrinsic calibration of the Velodyne 32-beam sensor can be represented with three equations, which transform the spherical coordinates of each acquired point into Cartesian coordinates. The three equations for a point p' acquired by a fiber i at time t are the spherical-to-Cartesian conversion given in equation (2):

x = ρ i,k * cos(φi) * sin(θi),  y = ρ i,k * cos(φi) * cos(θi),  z = ρ i,k * sin(φi)    (2)

We want to correct this model for the intrinsic calibration. Indeed, the model used by the constructor supposes that the sensor is perfect. We choose the following corrected model for each beam, which was presented in (Chan and Lichti, 2013): • Between each beam, it is supposed that there is the same vertical angle separation. We add an offset δφi on each vertical angle φi to correct small errors which could exist.
• All the beams are supposed to be placed on the same vertical plane. An error of alignment can exist, and we add an offset δθi on the horizontal angle θi.
• We add an offset δρi on the distance ρi(k) between the origin of beam i and the acquired point k.
• Finally, all the beams are supposed to have the same origin, which is not obvious. We add a small vertical offset for each beam, which takes into account small errors due to different origins.
All of the offsets we added to equation (2) give a new intrinsic transformation. A linearization at the first order gives, for each point p i, an expression which is linear in the offsets. Then, for the calculation of these (unknown) offsets, we first choose a fiber as a reference: this way, we reduce the number of degrees of freedom of the system, which allows us to find a unique solution for these offsets. The fiber chosen as a reference is fiber 15 of the Velodyne, which has a vertical angle of 0°. We then use the linearization of equation (1) to optimize our intrinsic parameters: in total, there are 4 * 31 = 124 parameters to optimize. In the equation, these parameters appear in the terms p i,k and m j,k. We then have a linear least squares problem to solve, with an objective function (4) in which each residual is the point-to-plane distance expressed linearly in the offsets, the points p i,k and m j,k being transported to the global reference frame through Rnav, Tnav and the extrinsic rotation R(α, β, γ). In eq. (4), we take i < j (otherwise the p i and m j terms are inverted), with i > 0 and j < 31. The solution which minimizes the objective function (4) is the solution of the associated normal equations, a linear system.
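The normal-equations step can be sketched on a toy problem (a generic pure-Python least squares solver; the actual system has 124 unknown offsets):

```python
def solve_normal_equations(A, b):
    # Least squares: minimize ||A x - b||^2 by solving (A^T A) x = A^T b
    n, m = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(m)]
           for i in range(m)]
    Atb = [sum(A[r][i] * b[r] for r in range(n)) for i in range(m)]
    # Gauss-Jordan elimination with partial pivoting on the small normal system
    M = [row[:] + [v] for row, v in zip(AtA, Atb)]
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(m):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    return [M[i][m] / M[i][i] for i in range(m)]

# Toy linearized system: three residuals constraining two offsets
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [0.02, -0.01, 0.01]
delta = solve_normal_equations(A, b)  # [0.02, -0.01]
```

In practice the matrix is built from the linearized point-to-plane residuals, and the recovered vector contains the four offsets of each non-reference beam.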
Validity of the calibrations
We defined in the previous sections an energy function that, through an optimization process, should give better calibration parameters for our point clouds. We discuss in this section the condition that validates a calibration obtained with our optimization process. The value of the energy J should be small enough, under a threshold: as said in section 3.1, our energy follows a chi-squared distribution. A validation threshold at 97% is 3σ², with σ² the variance of the point cloud noise. For example, for real data, the noise comes from different sources; with our mobile mapping system, we have a good precision, with a standard deviation for the noise of around 5 cm. This gives us a threshold of around 75 cm² for the value of energy J, in order to validate the calibration process.
We also define an error value for each category of intrinsic parameter, to characterize the difference between each offset of an intrinsic parameter and the associated ground truth when known. The error is the sum of the squares of the final offsets for the intrinsic parameters: for each parameter category X ∈ {φ, θ, ρ, H} (H being the vertical offset), the error is Σi (δXi)². In the ideal case, these errors should be close to 0 for each parameter. For a real point cloud, these errors should be small.
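Both criteria can be sketched as follows (a minimal illustration; the numerical values come from the text, the function names are ours):

```python
def calibration_valid(energy_j, sigma):
    # Chi-squared based acceptance at ~97%: J must fall below 3 * sigma^2
    return energy_j < 3.0 * sigma * sigma

def parameter_error(offsets):
    # Per-category error: sum of squared final offsets (ideally close to 0)
    return sum(d * d for d in offsets)

# With a 5 cm noise standard deviation, the threshold is 3 * 25 = 75 cm^2:
# a final energy of 0.28 cm^2 passes, an initial energy of 404.58 cm^2 does not
ok_after = calibration_valid(0.28, 5.0)     # True
ok_before = calibration_valid(404.58, 5.0)  # False
err = parameter_error([0.01, -0.02])        # 0.0005
```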
EXPERIMENTAL RESULTS
In this section, we present calibration results on 3 point clouds: one is simulated, and two come from acquisitions in real urban areas. We tested the optimization on other simulated and real data, and obtained similar results in general.
Data used for the optimizations
The simulated data used are point clouds which represent an acquisition in an urban area. The environment is made of vertical planes (representing walls and façades) and horizontal planes (the ground, representing the road). These data are used to validate our algorithms: indeed, for this kind of point cloud, we know the ground truth, which is the optimal intrinsic calibration parameters. To validate our optimization, some error is added to each calibration parameter that we want to retrieve, and the expected result is to have the δX as close to zero as possible. The simulated data, made of 5 million points, have the following features: point cloud #1, presented in figure 4, is made of a ground and two vertical planes. The vehicle is doing a turn, and there is no variation of altitude in this point cloud.
The real data come from two different acquisitions, one in the city of Montbeliard and the other in the city of Dijon, both in France. They are used to show optimization results on data acquired in different environments: • Point cloud #2, presented in figure 5, is part of an acquisition in the city of Montbeliard, France. The point cloud contains some turns, several façades and a small variation of altitude. It is made of 10 million points.
• Point cloud #3, presented in figure 6, is part of an acquisition in the city of Dijon, France. This time, there is no turn during the acquisition, several façades and a small variation of altitude. The point cloud is made of 5 million points.
Datasets
In our experiments, we use the same information for both datasets, simulated and real. We have raw information from the sensor, which is composed of: • the position and orientation of the vehicle at a frequency of 100 Hz. This is the position of the IMU in the world reference frame, fused with other information from proprioceptive sensors, such as the GPS and the odometer.
• the coordinates of each acquired point in the spherical coordinate system of the sensor reference frame. Since we are optimizing the intrinsic calibration parameters, these data are necessary.
• the "beam" which acquired each point, since we work with a multi-beam sensor.
This information gives us the position of the vehicle and its trajectory with good observability: indeed, we only work on the calibration parameters of the acquisition system, and to have a well reconstructed point cloud at the end of the optimization, we need to know the trajectory of the vehicle precisely.
Implementation and algorithm parameters
The algorithm we presented was implemented in C++. The EIGEN library (Eigen library, 2015) was used for all operations on matrices and vectors, and the FLANN library (FLANN library, 2015) (Fast Library for Approximate Nearest Neighbors) was used for the nearest neighbor search. The different algorithms run on a computer with a Windows 7 64-bit OS, 32 GB of RAM and an Intel Core i7 processor clocked at up to 2.80 GHz.
Our algorithm was tested with synthetic and real urban data: for the synthetic data, the parameters were known precisely. For the real data, we have the simplified intrinsic calibration model, and we want to find small biases which correct the model. For both datasets, we started with arbitrarily chosen initial intrinsic biases. Our algorithm has some parameters to set. We start by subsampling the data, keeping about one point out of three, because the point clouds have a high resolution; this reduces the computation time and the memory usage without changing the results. The number of neighbor beams for a beam bi was fixed to 4 (N = 2). Concerning the weights w i,j,k, a threshold of 20 cm was chosen for the maximal distance dmax between a point p i,k and its nearest neighbor m j,k on the neighbor beam. These parameters were fixed for all the tests: different values were tried, but the ones presented give both good optimization results and acceptable computation times.
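The role of the distance threshold can be sketched as follows (a minimal illustration with our own naming, not the authors' implementation): a pairing between a point and its nearest neighbor on a neighboring beam only contributes to the energy if the two points are within dmax of each other.

```cpp
#include <array>
#include <cmath>

constexpr double kDMax = 0.20;  // dmax: 20 cm, as used in our tests

// Binary weight for a pairing between point p (on beam i) and its nearest
// neighbor m (on a neighboring beam j): pairs farther apart than dmax are
// considered unreliable and discarded.
double pairWeight(const std::array<double, 3>& p,
                  const std::array<double, 3>& m) {
    double d2 = 0.0;
    for (int i = 0; i < 3; ++i) d2 += (p[i] - m[i]) * (p[i] - m[i]);
    return std::sqrt(d2) < kDMax ? 1.0 : 0.0;
}
```

Subsampling the cloud before forming such pairs reduces the number of weight evaluations roughly in proportion, which is why it lowers both computation time and memory use with little effect on the optimum.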
Optimization of the intrinsic calibration parameters
In this section, we present the results of the optimization of the intrinsic calibration parameters, as presented in section 3. We show the robustness of our optimization method and the improvements on the point cloud.
4.4.1 Results on simulated data. For the simulated data, and because the intrinsic calibration parameters are optimal (this is the way the simulated data are constructed), we added some biases to the calibration parameters and compared the results between the optimization of the extrinsic calibration parameters only and the optimization of all the calibration parameters. The errors added to the intrinsic parameters were between -3° and 3° for the angle parameters (φ and θ), and between -10 cm and 10 cm for the distance parameters (ρ and Hz). We show the optimization results for point cloud #1; we expect final biases as close to zero as possible. Figure 4 presents the same point cloud, on top with bad intrinsic calibration parameters and at the bottom with corrected parameters after our optimization. We see that with the optimization of the intrinsic calibration parameters, we have improved the quality of the point cloud: the planes are correctly planar after the optimization. Figure 10 gives the evolution of the energy through the iterations, and we can see that the energy decreases from a value of 404.58 cm² to a value of 0.28 cm². We also show the evolution of the total weight used to normalize the energy, which mirrors the energy: when the energy decreases, the total weight increases, until the energy converges. This shows that the optimization improves the structure of the point cloud and correctly registers the data acquired by the different beams, as explained in section 3.1. Finally, table 11 gives the errors we defined in section 3.3 before and after the optimization of the intrinsic parameters: the errors are smaller after the optimization, and close to 0 as expected. The final energy is small, and the final intrinsic biases are close to 0, which validates the results of our optimization.
For the optimization of the intrinsic calibration parameters, the computation time for point cloud #1 is 5 minutes, which is acceptable given the high number of parameters optimized.
4.4.2 Results on real urban data. In this section, we present some results on real urban data. For the real data, we do not know the ground truth: we suppose that the simplified model can be corrected, and expect small biases to add to the model, as presented in section 3. With our tests, we have seen that there is no visual improvement when no intrinsic parameter errors are added at the start of the optimization; still, the energy is reduced slightly and the final biases for the intrinsic parameters are small, under 10 cm for the translation parameters and under 1 degree for the rotation parameters. For the presented optimization results, we added some important errors to the intrinsic parameters (to visually show the improvements on the structure of the point cloud), and the expected result was smaller errors, which improved the structure of the point clouds.
Figure 10. Evolution of the energy of synthetic point cloud #1. In blue, the evolution of our energy during the optimization; in red, the number of paired points at each iteration for our optimization method.
Figure 8 shows the improvements on point cloud #2 with our optimization: on top, the point cloud with the added errors; at the bottom, the point cloud with optimized intrinsic parameters. Figure 12 gives the evolution of the energy with the iterations for point cloud #2: we see that the energy decreases significantly, from a value of 387.39 cm² to a value of 55.03 cm², which shows that the structure of the point cloud has been improved and validates the optimization result. Finally, table 13 gives the final errors for the optimized intrinsic parameters: as expected, the optimization yields smaller intrinsic biases. We make the same observations for point cloud #3: figure 9 shows the improvements on the point cloud, and figure 14 the evolution of the energy with the iterations. Table 15 shows the same results as for point cloud #2.
Finally, the computation times are longer for the real point clouds, because there are more iterations in the optimization. The computation times are respectively 25 minutes for point cloud #2 and 12 minutes for point cloud #3.
CONCLUSION
We presented in this paper a novel method for the automatic optimization of the intrinsic calibration parameters of a terrestrial LIDAR system, as a post-processing step. We correct the intrinsic model of a multi-beam LIDAR by solving an optimization problem. The optimization process is robust to large initial errors, as shown by the optimization results: it yields corrected calibration parameters and a well-structured point cloud, in which the global noise is reduced. We also presented results on real point clouds acquired by a Velodyne multi-beam sensor: our optimization can be applied to any multi-beam LIDAR sensor configuration, as long as there is overlapping data between the beams.
Dynamical Tunneling in More than Two Degrees of Freedom
Recent progress towards understanding the mechanism of dynamical tunneling in Hamiltonian systems with three or more degrees of freedom (DoF) is reviewed. In contrast to systems with two degrees of freedom, the three or more degrees of freedom case presents several challenges. Specifically, in higher-dimensional phase spaces, multiple mechanisms for classical transport have significant implications for the evolution of initial quantum states. In this review, the importance of features on the Arnold web, a signature of systems with three or more DoF, to the mechanism of resonance-assisted tunneling is illustrated using select examples. These examples represent relevant models for phenomena such as intramolecular vibrational energy redistribution in isolated molecules and the dynamics of Bose–Einstein condensates trapped in optical lattices.
Introduction
The phenomena of dynamical tunneling (DT), introduced by Davis and Heller nearly four decades ago [1] and anticipated in earlier studies, is associated with processes that are classically forbidden but occur quantum mechanically.The reader may consult ref. [2] for a detailed historical perspective on dynamical tunneling.More importantly, such processes in classical dynamics may be energetically allowed yet dynamically forbidden.Thus, while no barriers may be apparent in the configuration space, dynamical barriers can and do exist in the full phase space of the system.Quantum dynamics can then mix near-degenerate states via tunneling through such dynamical barriers.A profound consequence is the proliferation of quantum pathways that open up for the dynamical evolution of an initial quantum state.As a consequence, substantial differences between the classical and quantum dynamics can emerge in the limit of "sufficiently long" timescales.
Interestingly, although it is a purely quantum effect, it has been established that the nature of the classical phase space has a strong influence on DT.Indeed, the proposed mechanisms typically invoke specific structures in the phase space.For example, the importance of nonlinear resonances, chaos, Kolmogorov-Arnold-Moser (KAM), and partial barriers in the multidimensional phase space has been clearly established.Thus, the resonance-assisted (RAT) [35][36][37][38][39][40] and chaos-assisted tunneling (CAT) [41,42] mechanisms have been studied rather extensively in the context of Hamiltonian systems with two degrees of freedom.The former is relevant in near-integrable systems wherein neardegenerate quantum states can mix due to the presence of specific nonlinear resonances.For smaller effective h the near-degenerate states can be coupled via a multitude of nonlinear resonances [37,39].In the case of CAT, one invokes a coupling between the symmetric regular islands and the chaotic sea.The chaotic sea is modeled via random matrix theory with a typical three-level scenario that involves avoided crossing of the tunneling doublet with a chaotic state [41,43].In mixed systems, one needs to invoke [36] both RAT and CAT in general for a proper quantitative description of DT.A comprehensive review of the different mechanisms and their interplay can be found in the articles in the two edited volumes [44,45].On the other hand, Shudo and coworkers have recently suggested [46][47][48] that in the limit of "ultra" near-integrable systems, enhancements in tunneling probabilities may not correspond to any classical phase space structure and a careful look at the complex phase space dynamics is necessary.Nevertheless, in this review, the former viewpoint is taken for a couple of reasons.First, typical physical systems are far from the ultra nearintegrable limit and have mixed regular-chaotic phase space.One anticipates that the extent of modulation of DT due to the phase space 
features will outweigh any purely quantum contribution.Second, as exemplified by the RAT and CAT studies, identifying specific phase space structures as dominant contributors allows for a predictive semiclassical theory [2,41,48].
Despite extensive investigations of DT over the past few decades, the fact remains that to date, very few studies have been performed for systems with three or more degrees of freedom [49][50][51][52].There are several reasons for this.Chief among them has to do with the nontrivial change in the phase space topology, and hence transport, in going from f = 2 to f ≥ 3 Hamiltonians.As is well known, whereas mixed regular-chaotic phase spaces can manifest in both cases, in the f ≥ 3 case, the chaotic regions are no longer disconnected.Specifically, the chaotic regions associated with the destroyed separatrices of the various nonlinear resonances form an intricate network known as the Arnold web.It is thus possible for "distant" regions in the phase space to be connected by purely classical transport.One such mechanism for phase space transport in the near-integrable limit is known as Arnold "diffusion", which is expected to occur on exponentially long timescales.It should be noted here that the transport in the connected chaotic layer due to the Arnold mechanism is not necessarily a normal diffusive process over moderately long timescales.Thus, associating a diffusion constant with the process is questionable [53,54].One may perhaps argue that the Arnold diffusion timescale is much longer than DT (or, for that matter, any physically relevant) timescale, and hence of limited interest-a sentiment already expressed by Davis and Heller [1] when they concluded their classic study by saying that "Identification of dynamical tunneling in multidimensional systems may be a matter of comparing a small flow classically to a large quantum mechanical coupling".However, for f ≥ 3, Arnold diffusion is not the only mechanism that leads to transport.In the nonintegrable limit, there are indications [50,[55][56][57][58][59][60][61] that it is possible to have different mechanisms that lead to relatively faster exploration of the phase space.It is important to note that many of these 
mechanisms are operative only for systems with f ≥ 3 since they require the connected Arnold web structure.One key example involves a feature on the Arnold web known as a resonance junction wherein several independent resonances can intersect on the constant energy surface.Depending on the coupling regime, the junctions can give rise to local pockets of chaos.Classical trajectories can also become trapped for a finite amount of time leading to several interesting and dynamically relevant consequences [62][63][64][65][66][67][68].Similarly, concepts such as trapping due to partial barriers and "sticky" dynamics in f ≥ 3 have been investigated [62,[69][70][71][72][73] over the past decade in some detail.The jury is still out, but the indications are that there are substantial mechanistic differences in the transport mechanism for f ≥ 3 when compared to the fairly well-understood f = 2 case.
The key question, then, is to what extent are the proposed low-dimensional mechanisms for RAT and CAT valid for f ≥ 3 systems?Does this connectedness of the phase space lead to novel mechanisms for DT?That there must be some nontrivial consequences for DT is evident from a very early, and possibly the first, study [49] on a model f = 3 system.There, it was explicitly shown that quantum state mixing due to RAT can be clearly understood from the structures on the Arnold web.More recently, it has been shown [52] using a model 4D map that for f ≥ 3 one should anticipate the tunneling enhancements to show complicated peak structure due to the presence of resonance junctions (double or rank-2 resonance) and even drastic suppression of tunneling.The importance of resonance junctions to DT has also been brought up recently in model Hamiltonians relevant for IVR [51] and trapped ultracold atoms [19].Thus, although there is some progress, answering the questions posed above in general presents several challenges.First, visualization of the phase space is nontrivial but necessary to some extent in order to ascertain the local structures present near the location of the initial quantum state of interest.There has been some progress recently in this regard [74,75].Second, constructing the Arnold web at a level of detail concomitant with the effective Planck constant h (see next section) is a numerically demanding task.Thirdly, given that the resonances are dense everywhere on the web, an estimate of the classical transport timescale connecting two or more quantum states that are involved in DT is needed.This is important to establish if the state mixing is purely quantum mechanical or not.Substantial progress [53,60,[76][77][78] has been made recently in terms of estimating the timescale for various model Hamiltonians in the context of Arnold diffusion and Nekhoroshev stability.However, attempts to adapt such techniques to models wherein DT can occur are still lacking.
In the context of the remarks made above, several studies have focused on searching for explicit quantum signatures of the novel phase space transport mechanisms.For instance, Martens analyzed [79] a model three-resonance Hamiltonian to see if the excited eigenstates are delocalized along the resonance channels.Although such delocalized eigenstates were observed, whether or not one can associate them with the Arnold diffusion alone was not clear.In fact, a recent study [51] on the same system (see next section) shows that the observed delocalization can also be due to extensive dynamical tunneling.Leitner and Wolynes quantized the three-resonance model (also known as the stochastic pump model) and noted [80] the equivalence to transport along a disordered wire.Consequently, for any finite value of h, quantum localization was predicted.Importantly, localization length was shown to scale as h−3 , and arguments were provided for the possibility of enhanced transport near the intersection of two independent resonances on the Arnold web.Manifestation of Arnold diffusion in quantum systems has also been studied by Malyshev and coworkers in a series of papers [81][82][83].It was concluded that if the density of states inside the chaotic layers is large enough (so-called Shuryak border) then quantum Arnold diffusion can occur.Note that an example [84] in the molecular context also indicates that quantum selection rules may limit the extent of diffusion.However, it has also been pointed out [82] that this threshold may not be crucial in driven systems.Indeed, extended diffusion has been observed in a driven two-dimensional optical lattice [85] model.Nevertheless, as concluded by Leitner and Wolynes earlier [80], "quantum" Arnold diffusion is weaker than the classical counterpart due to quantum localization effects.The fact that a combination of quantum localization and novel classical transport can have profound effects has been brought out very nicely by the Dresden 
group.For example, Stöber et al., in their recent study [86] on coupled kicked rotors, have shown that partial barriers in 4D maps are more restrictive for quantum transport when compared to the 2D maps.A further example comes from the work of Schmidt et al. wherein, using a "synthetic" Hamiltonian, it has been argued [87] that classical drift along a sufficiently wide resonance channel can destroy quantum localization.Consequently, quantum dynamics ensuing from an initial quantum state can explore large regions of the Arnold web.Please note that such extensively delocalized eigenstates have been observed earlier [51] in the context of the Martens model.Moreover, studies [88,89] do indicate that nonlinear interactions can destroy quantum localization.
In what follows, a few of the models are presented along with the key observations.The discussions are by no means exhaustive and certainly no replacement for the original literature, but they do highlight the complexity of DT in f ≥ 3 cases.The review ends with a partial list of questions that remain unanswered.
Arnold Web: Definition, Construction, and Examples
Given the importance of the Arnold web to DT in f ≥ 3 systems, it is imperative to start with a definition of the web and the generic features.For this purpose, consider a general Hamiltonian of the form H(J, θ) = H 0 (J) + ϵV(J, θ) (1) with (J, θ) ≡ (J 1 , J 2 , . . ., J f , θ 1 , θ 2 , . . ., θ f ) being the action-angle variables of the f -degrees of freedom system.The zeroth-order part H 0 is assumed to be non-degenerate and integrable.The perturbation is denoted by V(J, θ) with ϵ representing the strength of the perturbation.Typically, for ϵ ̸ = 0, the system is nonintegrable, and depending on the perturbation strength, the phase space may vary from being near-integrable to strongly chaotic.Please note that in many instances, one may not be able to explicitly determine the canonical transformations that bring the Hamiltonian to the above form.Nevertheless, for near-integrable systems and in the context of RAT, the Hamiltonian in Equation ( 1) is an appropriate starting point.As we see below, the classical limit Hamiltonians corresponding to the Bose-Hubbard model for trapped cold atoms and effective spectroscopic model for molecules are naturally of the form considered.Moreover, the correspondence J k ↔ (n k + µ k /2)h between the classical actions and the quantum numbers n k , with the associated Maslov index µ k , provides a convenient platform to compare and contrast the classical and quantum dynamics.From a zeroth-order perspective, one can then define the nonlinear frequencies which depend on the actions due to the condition of non-degeneracy.The various frequencies can satisfy commensurability conditions of the form f ) being an integer vector.The condition in Equation (3) represents a nonlinear resonance in the action space of order k | with a width scaling as √ ϵ and exponentially with the order.Typically, low-order resonances dominate the early time dynamics, whereas high-order resonances become important for longer periods.In the quantum context, one 
must also compare the effective h with the resonance width to assess the importance of the specific resonance to the dynamics.Interestingly, this dynamical hierarchy of the resonances in terms of their order plays a crucial role in modeling the IVR dynamics in large polyatomic molecules [90].
The resonances defined by Equation (3) are hypersurfaces in the action space that can intersect the constant zeroth-order energy surface H 0 (J) = E.For f = 2 the intersections are at isolated points, whereas for f ≥ 3, the resonances are no longer isolated, and as seen in Figure 1a, giving rise to a connected network of resonances known as the Arnold web.In Figure 1b, an enlarged portion of the web is shown as an example to indicate that the resonances are dense everywhere and form several resonance junctions.Thus, the resonances of various orders form an intricately connected region over which classical and quantum transport can occur.This aspect implies that any initial state is bound to be under the influence of several resonances.Nevertheless, as indicated in the introduction, one anticipates that only resonances up to a certain maximal order might be relevant for the timescale of interest.What determines this maximal order?Nekhoroshev's theory [91] is ideally suited for answering this question.In this approach, one restricts attention to resonances up to a maximum order O α = K(ϵ).Thus, as sketched in Figure 2 for f = 3, the Arnold web can now be divided into various domains.The no-resonance domain comprises all points in the action space that are sufficiently far from resonances of order K.In this case, the Hamiltonian is integrable, and frequencies do not vary with time except for an exponentially small diffusion caused by resonances of order higher than K.In the single resonance domain, it is possible to transform the Hamiltonian to an integrable single resonance of order O α ≤ K.One has a fast bounded drift transverse to the resonance line.In the double resonance domain, one has two independent resonances intersecting to form a junction, and the resulting system is nonintegrable.The chaotic motion is bounded and can happen in the region around the junction.Please note that for f > 3 one has resonance planes, and it is possible to have m ≤ f − 1 
independent resonances that can intersect to form rank-m (or multiplicity-m) junctions.The above picture of working with a finite set of resonances leads to Nekhoroshev's famous stability estimate.For ϵ ≪ 1, an initial condition (J(0), θ(0)) on the Arnold web satisfies for 0 < ϵ < ϵ 0 and times with (J 0 , t 0 ) being positive constants.The stability exponents are estimated to be a = b = 1/2( f − m) in the resonance domains.The bound in Equation ( 4), apart from indicating stability on an exponential time scale, also implies increased stability near resonance junctions.Interestingly, the stability increases with the increasing rank of the junction.However, in the present context, the relevant timescale is that of DT, which is determined by the precise set of resonances that mediate RAT.Thus, given the sensitivity of RAT to even fairly weak and high-order resonances, an a priori knowledge of the maximal order is not obvious.In addition, for systems with small effective h, the density of near-degenerate states is high near junctions with possible involvement of CAT due to the bounded chaos in the vicinity of the junctions.In fact, provided CAT is occurring, the DT timescale may be considerably short, and hence, interesting competition between classical and quantum transport may manifest near the junctions.The zeroth-order picture is valid for ϵ ≪ 1, and with increasing perturbation strength, the resonances widen, leading to overlaps and the generation of large regions of chaos.The system transitions from the Nekhoroshev regime to the Chirikov regime, as shown in Figure 3 for the example of a model Hamiltonian [94].In the deep Chirikov regime, there is perhaps no meaningful way to define DT, a statement that is true even in the f = 2 case.Theoretically, the former regime, flanked by the Kolmogorov-Arnold-Moser (KAM) and Chirikov regimes, is fairly narrow, and one may rightfully question if a typical physical system can be in such a regime.However, as noted [95] by 
Morbidelli and Froeschlé, in practice, there is a wider range of ϵ value which characterizes the Nekhoroshev regime.Consequently, studies on DT in model systems in the vicinity of rank-m junctions are relevant.
tion of the unpering is predicted by itions in the neighf invariant unperthat satisfy a resowith some integers a suitable accuracy (11) ⌺ i k i .Theresuch a set, which is motions of the sysures.old web is peculiar.he frequency space , the Arnold web atisfying ⌺ i k i i ϭ at decreases with re, it is open and ation is suitably ive measure.This nalytically in (3) ve conditions (ese of the perturbarous proof of the nd irregularity in te, not completeysically interestssful approaches vestigations (12).sics, the question integrable Hamilnse of the KAM ause for the mas it provides stand describes moactions (13), there o remain as close mputed orbits in eraction between nd not-yet-solved the solar system, whether the orbits of a significant ill change or not evious work has pplications of the 6 ).Here we give n of the Arnold erical test of regthe system, with fore.with the following ϭ 2 (0) ϩ I 2 t, 3 (t) ϭ 3 (0) ϩ t rotate with constant angular velocity.Therefore, each couple of actions I 1 , I 2 characterizes an invariant torus T 3 , and all motions on the considered torus are quasi-periodic with frequen-cies 1 ϭ I 1 , 2 ϭ I 2 , 3 ϭ 1. Conversely, for any small ε different from zero, H ε is not expected to be integrable.However, we expect that the KAM theorem applies, and consequently the phase space is filled by a large set that is a small deformation of the unperturbed one.Conversely, nothing is predicted by KAM theory for initial conditions in the neighborhood of the set made of invariant unperturbed tori with frequencies that satisfy a resonance condition ⌺ i k i i ϭ 0 with some integers (k 1 , . . ., k n ) ʦ Z n ,0گ within a suitable accuracy that increases with the order (11) ⌺ i k i .Therefore, in the neighborhood of such a set, which is called the Arnold web, the motions of the system can exhibit chaotic features.
The topology of the Arnold web is peculiar.To describe it, we resort to the frequency space 1 , . . ., n .In this space, the Arnold web projects on the frequencies satisfying ⌺ i k i i ϭ 0 with a neighborhood that decreases with the order ⌺ i k i .Therefore, it is open and dense, and if the perturbation is suitably small, it has a small relative measure.This structure was explained analytically in (3) but only for very restrictive conditions (especially on the magnitude of the perturbation).In addition, the rigorous proof of the existence of instability and irregularity in the Arnold web is a delicate, not completely solved problem.For physically interesting systems, recent successful approaches are based on numerical investigations (12).In different fields of physics, the question of the stability of quasi-integrable Hamiltonian systems in the sense of the KAM theorem is important, because for the majority of initial conditions it provides stability for infinite times and describes motions.In beam-beam interactions (13), there is the problem of having to remain as close as possible to given computed orbits in order to indeed have interaction between particles.Within the old and not-yet-solved problem of the stability of the solar system, it is not completely clear whether the orbits of some planets (14 ) and of a significant number of asteroids (15) will change or not in an important way.Previous work has been based on numerical applications of the frequency-map analysis (16 ).Here we give a graphical representation of the Arnold web, obtained with a numerical test of regularity of the solutions of the system, with a sharpness never seen before.
We consider a system with the following Hamilton function where I 1 , I 2 , I 3 ʦ R and 1 , 2 , 3 ʦ S are canonically conjugated (17), and ε is a parameter that the larger it is, the more perturbed the problem becomes.The canonical equations of the integrable Hamiltonian H 0 are integrated: I 1 , I 2 , I 3 stay constant while the angles at time t 1 (t) ϭ 1 (0) ϩ I 1 t, 2 (t) ϭ 2 (0) ϩ I 2 t, 3 (t) ϭ 3 (0) ϩ t rotate with constant angular velocity.Therefore, each couple of actions I 1 , I 2 characterizes an invariant torus T 3 , and all motions on the considered torus are quasi-periodic with frequen- Conversely, for any small ε different from zero, H ε is not expected to be integrable.However, we expect that the KAM theorem applies, and consequently the phase space is filled by a large set that is a small deformation of the unperturbed one.Conversely, nothing is predicted by KAM theory for initial conditions in the neighborhood of the set made of invariant unperturbed tori with frequencies that satisfy a resonance condition ⌺ i k i i ϭ 0 with some integers (k 1 , . . ., k n ) ʦ Z n ,0گ within a suitable accuracy that increases with the order (11) ⌺ i k i .Therefore, in the neighborhood of such a set, which is called the Arnold web, the motions of the system can exhibit chaotic features.
The topology of the Arnold web is peculiar.To describe it, we resort to the frequency space 1 , . . ., n .In this space, the Arnold web projects on the frequencies satisfying ⌺ i k i i ϭ 0 with a neighborhood that decreases with the order ⌺ i k i .Therefore, it is open and dense, and if the perturbation is suitably small, it has a small relative measure.This structure was explained analytically in (3) but only for very restrictive conditions (especially on the magnitude of the perturbation).In addition, the rigorous proof of the existence of instability and irregularity in the Arnold web is a delicate, not completely solved problem.For physically interesting systems, recent successful approaches are based on numerical investigations (12).In different fields of physics, the question of the stability of quasi-integrable Hamiltonian systems in the sense of the KAM theorem is important, because for the majority of initial conditions it provides stability for infinite times and describes motions.In beam-beam interactions (13), there is the problem of having to remain as close as possible to given computed orbits in order to indeed have interaction between particles.Within the old and not-yet-solved problem of the stability of the solar system, it is not completely clear whether the orbits of some planets ( 14) and of a significant number of asteroids (15) will change or not in an important way.Previous work has been based on numerical applications of the frequency-map analysis (16 ).Here we give a graphical representation of the Arnold web, obtained with a numerical test of regularity of the solutions of the system, with a sharpness never seen before.
Construction of the Arnold Web
There are several methods to numerically construct the Arnold web. The essence is to use a measure that can unambiguously distinguish between non-resonant KAM tori, resonance zones, and chaotic regions. Although computing the Lyapunov exponents would be ideal, the numerical overhead is rather large. Consequently, there is considerable interest in numerical approaches that are relatively fast and modest in effort, so that vast regions of the Arnold web can be mapped out quickly. This would then allow for further studies of the classical transport timescales and comparison with the quantum dynamics of specific initial states.
The example shown in Figure 3 is constructed using the method of the fast Lyapunov indicator (FLI). The advantage of using the FLI is that one can use finite-time dynamics to distinguish between the different dynamical regions. A review of the FLI approach can be found in the original literature. The FLI belongs to a class of variational methods, and other measures such as the orthogonal FLI [96] (OFLI), the mean exponential growth of nearby orbits [97] (MEGNO), the smaller/generalized alignment index [98] (SALI/GALI), and the relative Lyapunov indicator [99] (RLI) have been proposed. We refer the reader to the review [100] for a comparison of the different chaos indicators and to a recent compendium of articles [101] for further information. More recently, Giordano and Cincotta have introduced [102] the Shannon entropy as an efficient measure to construct the Arnold web. Other approaches, like the maximum eccentricity-based method [103], frequency map analysis [56,62,104], and wavelet-based measures [105-107], provide a fairly powerful approach for the construction of the Arnold web.
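To illustrate the idea behind such fast indicators, here is a minimal sketch of an FLI-type computation. As a deliberate simplification it uses the Chirikov standard map rather than the f = 3 flow of the text: a tangent vector is iterated alongside the orbit and FLI(T) = max over t ≤ T of log ||v(t)|| is recorded. Regular orbits give slow (logarithmic) growth, chaotic orbits nearly linear growth. All parameter values below are illustrative.

```python
import math

def fli_standard_map(theta, p, K, n_steps):
    """Fast Lyapunov Indicator for the Chirikov standard map
    p' = p + K sin(theta), theta' = theta + p', iterated together
    with its tangent map. Returns max_t log ||v(t)||."""
    v_theta, v_p = 1.0, 0.0      # initial unit tangent vector
    log_norm_acc = 0.0           # accumulated log of the tangent-vector norm
    fli = 0.0
    for _ in range(n_steps):
        c = K * math.cos(theta)  # Jacobian entries use the OLD angle
        # tangent map: v -> J v with J = [[1 + c, 1], [c, 1]]
        v_theta, v_p = (1.0 + c) * v_theta + v_p, c * v_theta + v_p
        # the map itself
        p = (p + K * math.sin(theta)) % (2.0 * math.pi)
        theta = (theta + p) % (2.0 * math.pi)
        norm = math.hypot(v_theta, v_p)
        log_norm_acc += math.log(norm)
        v_theta, v_p = v_theta / norm, v_p / norm  # renormalize to avoid overflow
        fli = max(fli, log_norm_acc)
    return fli

# A rotational (regular) orbit at small K versus an orbit in the
# chaotic layer near the hyperbolic fixed point at larger K.
fli_reg = fli_standard_map(0.5, 2.0, K=0.3, n_steps=2000)
fli_cha = fli_standard_map(0.05, 0.0, K=1.5, n_steps=2000)
```

The same finite-time criterion, applied to a grid of initial actions, is what produces maps such as Figure 3.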
It is worth noting that, to date, the focus has been on mapping the Arnold web for f = 3 Hamiltonian systems. In this case, one can still project the web on an appropriate two-dimensional space, for example, the space of the two independent frequency ratios [62]. However, for f > 3 the situation becomes more complicated, and a different approach, such as the one proposed [108] by Fuji and Toda, may prove useful. In addition, anticipating the increased numerical effort, techniques such as the one based on graphics processing units [109] and Lyapunov weighted dynamics [110] might be more appropriate.
Martens' Three-Resonance Model
The model Hamiltonian introduced by Martens [79] is a fairly good one to study various aspects of DT. The quantum Hamiltonian is given by Equation (6), where the perturbation terms involve the operators a_i, a†_i, and a†_i a_i, i.e., the destruction, creation, and number operators. One can imagine the model in Equation (6) to be an effective "rotating-wave" limit approximation to a more general Hamiltonian. The mode frequencies and anharmonicities are denoted by ω_i and α_i, respectively. The zeroth-order quantum states are the Fock states |n_1, n_2, n_3⟩ with the associated zeroth-order energies E⁰_n. The dynamics of the various initial Fock states can then be studied for a wide range of coupling strengths using measures such as the inverse participation ratio and the survival probability. Such a detailed study is described in the recent review [7]. However, the focus here is on a specific class of initial states that are involved in DT. Therefore, it is important to study the classical dynamics, since otherwise it is not possible to unambiguously associate DT with the quantum dynamics. For this purpose, the classical limit of the Hamiltonian Equation (6) is constructed using the Heisenberg correspondence, and the classical Hamiltonian can be expressed in action-angle variables. It is easy to check that the above Hamiltonian is an f = 3 system, since there are no conserved quantities except the total energy. Using Equation (2), the three different resonance planes and their intersection with the constant energy surface H_0(J) ≈ E yield the zeroth-order Arnold web. For concreteness, at this stage we choose the zeroth-order Hamiltonian parameters to be (ω_1, ω_2, ω_3) = (1.1, 1.7, 0.9) and (α_1, α_2, α_3) = (−0.0125, −0.02, −0.0085) in scaled units. The parameters are essentially chosen so that various structures on the Arnold web at the energy of interest can manifest. Thus, by varying the parameters of the zeroth-order Hamiltonian, one can "engineer" different scenarios in terms of the location of the single resonances and the total number of resonance junctions. An example is shown in Figure 4, wherein the different web structures with changing ω_2 and E can be clearly seen. For example, at E = 20 and ω_2 = 1.3, the resonance planes do not manifest, and hence no web structure is expected. On the other hand, for ω_2 = 1.5 and E = 40, all three resonances can be seen, and one of the junctions appears around (J_1, J_2) ≈ (28, 0), i.e., at the "edge" of the action space. Similarly, at ω_2 = 1.9 the resonances R_3 and R_1 intersect for E = 30, whereas they do not intersect for E = 20. We mention, without going into details, that certain conditions known as steepness must hold in order for the Nekhoroshev theorem to apply. The Martens' model does not satisfy the steepness condition. However, suffice it to note that for the parameter choices made, our system is quasi-convex in the action region of interest. The zeroth-order analysis and expectations can be made more precise by numerically mapping the Arnold web using the FLI technique. Details of the computation can be found in the earlier publication [51]. Briefly, a large grid of initial conditions on the (J_1, J_2) plane is selected for a specific angle slice. The action J_3 is fixed using energy conservation, and the resulting ensemble of initial conditions is propagated to a sufficiently large time so that the FLI can clearly distinguish between the different dynamical behaviors. As an example, the Arnold web for a total energy E ≈ 40 is shown in Figure 5, indicating the existence of two prominent resonance junctions labeled A and B.
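The zeroth-order location of a junction such as A or B can be estimated directly from the nonlinear frequencies. The sketch below assumes, as a common convention for such spectroscopic Hamiltonians, the classical zeroth-order form H_0(J) = Σ_i (ω_i J_i + α_i J_i²), so that Ω_i(J) = ω_i + 2α_i J_i; the paper's exact H_0 may include half-integer (Maslov) shifts. A Newton iteration then solves the two resonance conditions 2Ω_1 = Ω_2 and Ω_2 = 2Ω_3 together with H_0(J) = E.

```python
import numpy as np

# Zeroth-order parameters quoted in the text (scaled units)
omega = np.array([1.1, 1.7, 0.9])
alpha = np.array([-0.0125, -0.02, -0.0085])

def Omega(J):
    """Zeroth-order nonlinear frequencies, assuming H0 = sum_i (omega_i J_i + alpha_i J_i^2)."""
    return omega + 2.0 * alpha * J

def H0(J):
    return np.sum(omega * J + alpha * J**2)

def junction(E, J_guess, iters=50):
    """Newton solve for the crossing of R1 = (2,-1,0) and R3 = (0,1,-2) on H0 = E."""
    J = np.array(J_guess, dtype=float)
    for _ in range(iters):
        W = Omega(J)
        F = np.array([2 * W[0] - W[1],   # R1: 2 Omega_1 - Omega_2 = 0
                      W[1] - 2 * W[2],   # R3: Omega_2 - 2 Omega_3 = 0
                      H0(J) - E])        # energy constraint
        DF = np.array([[4 * alpha[0], -2 * alpha[1], 0.0],
                       [0.0, 2 * alpha[1], -4 * alpha[2]],
                       [W[0], W[1], W[2]]])
        J = J - np.linalg.solve(DF, F)
    return J

J_star = junction(E=40.0, J_guess=[20.0, 5.0, 10.0])
```

The solved actions give the zeroth-order R_1 ∩ R_3 crossing at this energy, which is the kind of point the FLI map then resolves in detail.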
These arise from a crossing of the r⁽³⁾ ≡ (0, 1, −2) resonance (denoted as R_3 in Figure 5) with the r⁽¹⁾ ≡ (2, −1, 0) (denoted as R_1) and r⁽²⁾ ≡ (3, −2, 0) (denoted as R_2) resonances, as also predicted by the zeroth-order analysis in Figure 4. Note that the web is sparse because we have picked a model with exactly three primary resonances. Alternatively, one can think of the Hamiltonian as arising from restricting the resonances to a maximum order, as in the Nekhoroshev approach. A few points are worth noting. First, Figure 5 is not close to the Chirikov limit yet. Nevertheless, the chaotic regions near the rank-2 resonances are evident (see the inset). Second, the two junctions are well separated, which is ideal for investigating RAT in such regions. Third, previous work [7] has shown that the dynamics near the two junctions are quite different. With the Arnold web structure established for the given energy and coupling values, we can now study the RAT mechanism far from and near a specific resonance junction. In Figure 6, we show the quantum and classical dynamics of the initial Fock state |n⟩ = |22, 1, 19⟩ in terms of the survival probability P(t) = |⟨ψ(0)|ψ(t)⟩|². The location of the initial state on the Arnold web is also shown in the figure. Clearly, the initial state is in the vicinity of the R_1 resonance and away from the junction. Figure 6 shows that this is DT mediated by the nonlinear resonance, since the classical dynamics is localized. This is also clear from the classical dynamics projected onto the Arnold web (green dot). In contrast to the classical dynamics, the quantum counterpart shows coherent oscillations with a period of about T ∼ 1300 (∼400 T_2 in terms of the harmonic mode time period T_2 = 2π/ω_2) and nearly mimics a two-state Rabi oscillation. Further analysis shows that the second state involved in the quantum dynamics is |20, 2, 19⟩, which lies nearly symmetrically about the R_1 resonance center line (indicated by a red dot on the Arnold web in Figure 6). Further analysis shows that the
observed DT can be accounted for using the RAT theory [35] involving the R_1 resonance. How does the above single-resonance picture change if the initial Fock state is located close to the R_1-R_3 resonance junction? Given the fact that an infinity of resonances of various orders exists at the junction, one anticipates a more complicated picture when compared to the above single-resonance case. This is illustrated in Figure 7 for the dynamics of the initial state |25, 4, 9⟩. For low coupling strengths, Figure 7a shows that there are three other states |s_1⟩ = |23, 5, 9⟩, |s_2⟩ = |25, 3, 11⟩, and |s_3⟩ = |23, 3, 13⟩ that mix significantly. These states do not mix classically, and hence the quantum mixing is an example of DT. The mixing between the initial state and the states |s_1⟩ and |s_2⟩ can be associated with RAT mediated by the resonances R_1 and R_3, respectively. However, the state |s_3⟩ mixes coherently on a timescale of ∼10,000 T_2. This is a clear influence of the junction, since one can show that the (2, 0, −2) resonance induced at the junction mixes |s_2⟩ and |s_3⟩. This induced resonance is visible in Figure 7a, and the timescale is much longer due to the effective coupling strength being τ_1τ_3 ∼ 10⁻⁹, i.e., nearly four orders of magnitude smaller than for the primary resonances. Despite this, it is observed that the populations of all three states involved are nearly the same (∼15%) at ∼10,000 T_2. A more surprising and key aspect of the influence of a junction on DT occurs upon increasing the resonance strengths. As shown in Figure 7b, increasing the R_1 and R_3 resonance strengths leads to many more states that mix due to DT. However, some of the states mix classically as well. In fact, states |s_1⟩ and |s_2⟩ are now classically connected on timescales similar to the quantum ones. However, the quantum probabilities are larger and inverted relative to the classical result. Moreover, new states like |s_4⟩ = |21, 5, 11⟩ gain significant populations (∼30%) within
a timescale of about ∼800 T_2, whereas the state |s_3⟩, although still mixing solely due to DT, only gains about ∼5%. Note that the suppression of the population of |s_3⟩ happens despite the effective resonance strength being nearly an order of magnitude larger than in Figure 7a. Perhaps this suppression comes about due to the "canceling paths" proposed in the recent work [52] of Firmbach et al. Confirming this requires further study in terms of an appropriate effective Hamiltonian near the junction of interest. It is expected that variation of the effective h can lead to a better understanding of the results in Figure 7b. However, note that this is numerically challenging, since the density of states increases rather rapidly for the Martens' model. Thus, for h_eff ∼ 0.01 one may need to diagonalize very large matrices even when restricting attention to eigenstates in a narrow energy range. For example, with h = 1, there are nearly 150 near-degenerate states for ∆E ∼ 0.1 around E = 40. In any case, the model Hamiltonian in Equation (9) needs further study over a wider parameter range to bring out the influence of the Arnold web on the DT process.
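The near two-state Rabi behavior noted above for |22, 1, 19⟩ and |20, 2, 19⟩ can be caricatured with an effective 2×2 Hamiltonian, in which RAT theory identifies the off-diagonal element with an effective resonance-induced coupling. The detuning δ and coupling v below are purely illustrative numbers, not values extracted from the Martens' model; the point is only that the numerically propagated survival probability reproduces the analytic two-state formula.

```python
import numpy as np

def survival_probability(delta, v, times):
    """Survival probability of state |1> under the effective two-state
    Hamiltonian H = [[delta/2, v], [v, -delta/2]] (hbar = 1)."""
    H = np.array([[0.5 * delta, v], [v, -0.5 * delta]])
    evals, evecs = np.linalg.eigh(H)
    psi0 = np.array([1.0, 0.0])
    c = evecs.conj().T @ psi0                      # expand |psi0> in the eigenbasis
    probs = []
    for t in times:
        psi_t = evecs @ (np.exp(-1j * evals * t) * c)
        probs.append(abs(np.vdot(psi0, psi_t)) ** 2)
    return np.array(probs)

# Illustrative detuning and coupling (not fitted to the Martens' model)
delta, v = 0.02, 0.005
Omega = np.hypot(v, 0.5 * delta)                   # Rabi frequency
times = np.linspace(0.0, 2.0 * np.pi / Omega, 200)
P = survival_probability(delta, v, times)

# Analytic two-state (Rabi) result: P(t) = 1 - (v/Omega)^2 sin^2(Omega t)
P_exact = 1.0 - (v / Omega) ** 2 * np.sin(Omega * times) ** 2
```

In a RAT analysis, δ and v would be read off from an effective pendulum Hamiltonian for the R_1 resonance, and the oscillation period 2π/(2Ω) between the pair of states is the DT period.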
Trapped Ultracold Atoms
Another system wherein DT is expected to play a significant role is optically trapped ultracold atoms [111,112], which can be usefully analyzed in terms of the Bose-Hubbard Hamiltonian (BHH). There is an interesting parallel between the BHH and the effective spectroscopic Hamiltonian of the form in Equation (6): the number of sites (wells) in an optical trap and the number of particles on each site correspond to the number of vibrational modes and the excitation quanta of each mode, respectively. Thus, N particles trapped in an (f + 1)-site potential can be described by an f degrees of freedom Hamiltonian, since the total particle number is conserved. The hopping terms in the BHH correspond to nonlinear resonances in the classical limit, which is approached for many trapped atoms since h_eff ∼ N⁻¹. For the 2-site BHH, studies have shown that one can predict and experimentally observe [113] interesting phases such as the macroscopic quantum self-trapping (MQST) phase by analyzing the classical limit Hamiltonian [114]. In particular, MQST arises due to the interplay between the hopping (tunneling) and interaction strengths. Wüster et al. have shown [115] that MQST also emerges in the context of dynamical tunneling of a driven Bose-Einstein condensate in a single well. It is, therefore, interesting to ask if other novel phases can emerge in multi-site BHH models and if the existence of such phases can be correlated with features on the Arnold web.
Clearly, the first requirement for addressing the question above is to construct the Arnold web, and a minimal model is a 4-site system. Recently [51], such a system was analyzed where the BHH H = H_T + H_M was considered. The model is taken from the work [116] of Khripkov, Cohen, and Vardi. The site energies are denoted by U, and K, K_c are the hopping amplitudes. Essentially, as indicated in the Figure 8 inset, H_T describes a 3-site linear trimer coupled to a monomer via H_M. Note that for K_c = 0 the monomer decouples from the system and X ≡ n_1 + n_2 + n_3 is a conserved quantity. On the other hand, for K_c ≠ 0, the conservation of X is violated, but the total particle number N ≡ X + n_0 is conserved. Thus, the eigenstates of the full Hamiltonian can be expressed as a linear combination of the Fock states |n; N⟩ ≡ |n_1, n_2, n_3; N⟩. An aspect of interest for such bipartite models is to compare classical versus quantum thermalization [117] triggered by a weak monomer coupling. For instance, Figure 8 shows the extent to which eigenstates of the trimer are delocalized in the X direction, since [H, X] ≠ 0 for finite values of K_c. Three example trimer eigenstates are shown, and it is clear that the spreading in the X direction can be extensive for certain states. The question is whether this spreading in X is entirely due to DT or whether there is some classical contribution as well. To address this issue, one can study the dynamics of specific initial states |n; N⟩ for K_c ≠ 0, particularly those that contribute dominantly to the trimer eigenstate spreading seen in Figure 8.
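The decoupling at K_c = 0 can be checked directly by building a BHH of this trimer-plus-monomer structure in the Fock basis. The sketch below uses a small particle number (N = 6, for speed) and an assumed on-site interaction of the form (U/2) Σ n_j²; the precise H_T and H_M of [116] may differ in such details. The commutator [H, X] with X = n_1 + n_2 + n_3 vanishes for K_c = 0 and not otherwise.

```python
import itertools
import numpy as np

def fock_basis(n_sites, N):
    """All occupation tuples (n_0, ..., n_{n_sites-1}) summing to N."""
    return [s for s in itertools.product(range(N + 1), repeat=n_sites) if sum(s) == N]

def hop(state, i, j):
    """Apply a_i^dagger a_j to a Fock state; return (new_state, amplitude) or None."""
    if state[j] == 0:
        return None
    s = list(state)
    amp = np.sqrt(s[j] * (s[i] + 1))
    s[j] -= 1
    s[i] += 1
    return tuple(s), amp

def build_bhh(N, U, K, Kc):
    """Four-site BHH: site 0 = monomer, sites 1-2-3 = linear trimer (assumed layout)."""
    basis = fock_basis(4, N)
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    bonds = [(1, 2, K), (2, 3, K), (0, 1, Kc)]   # trimer bonds + monomer coupling
    for k, s in enumerate(basis):
        H[k, k] = 0.5 * U * sum(n * n for n in s)    # assumed on-site interaction
        for i, j, t_ij in bonds:
            for a, b in ((i, j), (j, i)):            # a_a^dag a_b + h.c.
                res = hop(s, a, b)
                if res is not None:
                    t, amp = res
                    H[index[t], k] += -t_ij * amp
    return H, basis

def commutator_norm_with_X(H, basis):
    """Frobenius norm of [H, X] with X = n_1 + n_2 + n_3 (diagonal in the Fock basis)."""
    X = np.diag([float(s[1] + s[2] + s[3]) for s in basis])
    return np.linalg.norm(H @ X - X @ H)
```

Running the check with and without the monomer coupling makes the conserved quantity explicit: K_c = 0 gives [H, X] = 0 to machine precision, while any K_c ≠ 0 breaks it, leaving only the total particle number conserved.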
Among the several initial states studied in a recent work [19], we illustrate the dynamics of the state |16, 0, 8; 40⟩. This state is chosen since it is representative of the class of states for which the dynamics has both classical and quantum contributions. As seen from Figure 9, for K_c = 0 the state is localized and not affected by the monomer. For finite K_c, the trimer is perturbed by the monomer, and the quantum survival probability decays, exhibiting multiple timescales. The shortest timescale in Figure 10a, of Kt ∼ 0.5, shows coherent oscillations involving the initial state and two other states due to the a_1 a†_0 hopping term. The analogous classical computations shown in Figure 10b indicate that there is a flow to the states corresponding to Figure 10a, albeit on a longer timescale.
On the other hand, Figure 10c shows that the longer timescale of Kt ∼ 100, seen in Figure 9 for K_c ≠ 0, correlates with significant population flow into multiple states. However, as Figure 10d shows, there is no classical probability flow to the states in Figure 10c, even on fairly long timescales. Thus, Figures 10b,c represent classes of states that are connected and not connected by the classical flow, respectively. It is also worth noting that while the quantum dynamics exhibits coherent oscillations over a timescale of Kt ∼ 500, the classical dynamics "thermalizes" by Kt ∼ 20. Thus, there is a distinct difference between the classical and quantum dynamics of the initial state of interest. Understanding the results shown in Figure 10 requires a careful study of the classical dynamics. As before, using the Heisenberg correspondence, the classical Hamiltonian can be expressed as H(J, θ) = H_0(J) + V(J, θ) (13), with H_0(J) ≡ U Σ³_{j=0} J²_j/2, where we have denoted θ_kl ≡ θ_k − θ_l. Using the zeroth-order nonlinear frequencies Equation (2), the five primary resonances of the above Hamiltonian can be determined along with their projections on a specific set of action planes of interest. For instance, in (J_2, J_3) space the three trimer-monomer resonances (denoted R_Mk) can be expressed in terms of the actions, as can the two resonances within the trimer subspace (denoted R_Tk). In the above, X_c ≡ Σ³_{k=1} J_k, and N ≡ J_0 + X_c is the classical analog of the quantum total particle number N. The expectation is that if the initial state is in the vicinity of the junctions formed by the intersection of R_Mk and R_Tk, then substantial perturbation of the trimer dynamics can occur for K_c ≠ 0. Moreover, several RAT pathways can open up at the junction and result in the multiple timescales seen in Figure 9.
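With H_0(J) = U Σ_j J_j²/2, the zeroth-order frequencies are simply Ω_j(J) = U J_j, so the primary 1:1 resonance conditions reduce to equalities between actions. A quick check, using the semiclassical identification J_j ≈ n_j (Maslov half-integer corrections ignored), confirms that the state |16, 0, 8; 40⟩ of Figures 9 and 10 lies on the R_M1 resonance (Ω_0 = Ω_1, i.e., J_0 = J_1):

```python
U = 0.5  # from the parameters quoted in the Figure 8 caption

def frequencies(n_trimer, N, U=U):
    """Zeroth-order frequencies Omega_j = U * J_j with J_j ~ n_j (site 0 = monomer)."""
    n0 = N - sum(n_trimer)          # monomer occupation fixed by particle conservation
    J = [n0] + list(n_trimer)
    return [U * j for j in J]

def on_resonance(n_trimer, N, j, k, tol=1e-9):
    """True if the zeroth-order 1:1 resonance Omega_j = Omega_k is satisfied."""
    W = frequencies(n_trimer, N)
    return abs(W[j] - W[k]) < tol

# Initial state of the text: |n1, n2, n3; N> = |16, 0, 8; 40>, so n0 = 16
W = frequencies((16, 0, 8), 40)   # -> [8.0, 8.0, 0.0, 4.0]
```

Since Ω_0 = Ω_1 = 8.0, the state sits on the monomer-site-1 resonance, consistent with the resonant monomer-trimer transfer discussed next.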
To confirm the above "suspicion", we construct the Arnold web for specific (X_c, N) using the FLI technique. Computations show that varying the trimer population X_c for a fixed total number of particles reveals the Arnold web [19] structure changing in terms of the type and number of resonance junctions. A typical web with several junctions is shown in Figure 11a for X = 36 and N = 40. In the context of the dynamics shown in Figures 9 and 10, the relevant portion of the Arnold web in (J_1, J_0) space is shown in Figure 11b along with the location of the initial state. The quantum dynamics of the initial state is shown in terms of the probability flow through the quantum number space up to a maximum time Kt ∼ 1000. A few observations can be made at this stage. First, the initial state is located within the R_M1 resonance. Consequently, population can resonantly transfer between the monomer and site 1 of the trimer. Second, significant delocalization can be observed around the resonance junction. However, on this timescale it is clearly non-uniform: the probability of population transfer to the monomer is larger. Third, the dynamics of the various states shown in Figure 10b,c are clearly identified on the Arnold web and confirm the role of the resonance junction. Specifically, the states connected by double arrows in Figure 11b are precisely the ones that are involved in DT in Figure 10c. Thus, based on the observed dynamics near the junction, a possible dominant path connects the state |n_1, n_2, n_3; N⟩ = |16, 0, 8; 40⟩ with the state |8, 0, 16; 40⟩, which can be directly correlated with Figure 11b. It can be shown (argued) that the first (last) of the DT paths occurs due to RAT involving a third (sixth) order resonance induced at the resonance junction. At this juncture, it is useful to recollect the previous discussion on the issue of the choice of a maximal order within the Nekhoroshev approach. Clearly, the DT timescales are sensitive to fairly
high-order resonances. Hence, the probability flow in Figure 10c is a clear f = 3 effect. One may argue that the initial and final states can be connected simply by the R_T2 resonance. However, this is not very probable, since the process involves eight particle exchanges between sites one and three of the trimer. Moreover, if that were the case, then the survival probability in Figure 9 would have decayed even for K_c = 0. Again, the above example hints at the possible effect of the resonance junctions on DT. Much more can be learned from this model by looking at wider parameter regimes. A start has been made in the recent work [19], and it would be interesting to study aspects of thermalization in such systems [118] due to the presence of the resonance junctions. Note that in the context of unimolecular decay reactions, there is [68] already a strong connection between the junctions and non-statistical dynamics.
Final Thoughts
This review has attempted to highlight the complexity of studying DT in systems with three or more degrees of freedom. Although a fair amount of progress has happened over the past decade, several questions remain unanswered. Here is a partial list:

1. Almost all the examples shown here suffer from one key issue: there is simply no accurate estimate of the classical stability times and their comparison to the DT timescales. Moreover, a careful study of the DT process by scaling the effective h needs to be done. In this regard, it may be worthwhile to study Martens' model from the stochastic pumping (or three-resonance) model perspective.

2. For mixed regular-chaotic phase spaces in f = 2, a combination of RAT and CAT is operative. Models combining the nonlinear resonances and random matrix theory have been relatively successful in understanding tunneling splittings. For f ≥ 3, the local chaos near the junctions may not be amenable to a random matrix approach. How does one account for the role of CAT, if relevant, near junctions?

3. The focus, understandably so, has been on f = 3 systems. What about f > 3 systems? Higher rank junctions are now possible. Moreover, the argument [80] that quantum Arnold diffusion may delocalize in analogy with transport along disordered wires is no longer valid. Similarly, whether the destruction of quantum localization on the Arnold web due to classical drift [87] holds in the presence of higher rank junctions is not clear at present. Already for f = 3, the results in Figures 7 and 10b seem to suggest a stronger Nekhoroshev stability for the quantum dynamics. Of course, one needs to ask: is there a "quantum" Nekhoroshev theorem? Some subtle issues in this regard have been outlined in the paper by Fontanari et al. [119].

4. Much of the argument invoking Nekhoroshev exponential stability needs modification when the quasi-convexity or steepness assumptions are violated. In such instances, one can have fast transport on the Arnold web. Does this then invalidate the notion of DT in such systems? Even for such systems, are there phase space regions that are classically disconnected over physically relevant timescales? In an impressive study, Pittman, Tannenbaum, and Heller have [50] made a start in terms of non-convex model Hamiltonians. In fact, and relevant to the previous point, they studied DT in systems with f = 3, 4, and 5 and argued that DT can be faster than the fast classical transport, hinting at mechanisms different from RAT. However, certain coupling schemes can result in comparable timescales for classical and quantum transport. More extensive studies of this and other such models would yield important insights.
The list (admittedly partial) of questions above indicates that our understanding of DT in f ≥ 3 systems is still in its infancy. However, answers to these questions are expected to shed light on issues ranging from IVR in polyatomic molecules to thermalization in interacting many-body systems.
Funding: This research received no external funding.
Figure 1.4 Sketch of the resonance network, i.e., the Arnol'd web, for a three-mode system. The resonances, with varying thickness representing varying strengths, form an intricate network over which the dynamics of specific ZOBS (circles) occurs. Possible barriers to the transport are indicated by dashed lines. Note the "hubs" in the network corresponding to the intersection of several low and high order resonances. Compare to the state space picture shown in the earlier figures.
Figure 2. Inclusion of resonances to a certain maximal order, defining the single, double, and no-resonance domains in Nekhoroshev theory. Fast drift (gray double arrows) occurs transversely to the individual resonances. Exponentially slow Arnold diffusion (thick blue arrow) can occur along the resonance. Figure adapted with permission from the PhD thesis [92] of S. Karmakar, which is based on the figure in [93].
Fig. 2. Evolution of the Arnold web for increasing values of the perturbation parameter. The lowest values of the FLI appear in black, and they correspond to the resonant islands of the Arnold web; the highest values appear in yellow, and they correspond either to chaotic motion arising at the crossing nodes of resonant lines or to the presence of a separatrix. The FLIs of all the KAM tori have about the same value, and therefore they appear with the same purple color. The choice of the color scale is suited to the value of the perturbation parameter and to the integration time. (Left column) A large portion of the action plane. Top: ε = 0.001, t = 1000; middle: ε = 0.01, t = 1000; bottom: ε = 0.04, t = 1000. (Right column) Enlargement of the figures on the left, obtained with a larger integration time in order to see smaller details. Top: ε = 0.001, t = 4000; middle: ε = 0.01, t = 2000; bottom: ε = 0.04, t = 2000.
Figure 3. Evolution of the Arnold web for increasing values of the perturbation parameter; details as in the caption of Fig. 2 above.
Fig. 3.4: Location of the resonances as a function of energy and the frequency ω_2, keeping the other frequencies ω_1 and ω_3 fixed. The values of the anharmonicity parameters α_i are given in Table 3.1.
Figure 4. Zeroth-order Arnold web prediction for the model Hamiltonian Equation (9) with varying total energy E and harmonic frequency ω_2 of the second mode. The other two mode frequencies are fixed at ω_1 = 1.1 and ω_3 = 0.9. Note that for every choice of ω_2 (a given panel), the resonances at three energies E = 20 (small blue circle), E = 30 (medium red circle), and E = 40 (large black circle) are shown. If a particular color line is missing, it implies that the corresponding resonance does not appear at that energy. Figure taken with permission from the PhD thesis [92] of S. Karmakar.
Figure 5. Arnold web for the model Hamiltonian Equation (9) at total energy E ≈ 40, constructed using the FLI technique. The initial angle slice is (π/2, π/2, π/2), and the resonant coupling strengths are taken as [τ_1, τ_2, τ_3] = [5, 1, 5] × 10⁻⁵. The FLI scale is shown, with FLI values greater than 3.7 indicating chaotic regions (in yellow), while the lowest FLI values (in blue) highlight the resonance zones. Two prominent junctions, labeled A and B, can be seen. The zeroth-order prediction of the resonance center lines is indicated in purple. Another junction, C, arises out of the intersection of higher order and induced resonances. (Inset) An enlarged plot of the region near junction A. The FLI scale is the same as in the main plot. Figure adapted from [51].
Figure 6. Classical (solid line) and quantum (dashed line) survival probabilities of the initial state |22, 1, 19⟩ (black) and the state |20, 2, 19⟩ (red). The location of the two states on the Arnold web is shown in the right panel. Parameters are as in Figure 5, and the projected classical flow of the initial state is shown in green. Note that the zeroth-order energies of the two states are E⁰_{22,1,19} ≈ 40.28 and E⁰_{20,2,19} ≈ 40.27. Figure adapted from [51].
Fig. 3.10: Classical and quantum survival probabilities for different coupling strengths [τ_1, τ_2, τ_3] = [50, 5, 50] × 10⁻⁵. The legend in (a) indicates the states involved; also shown are the initial ZOBS (arrow), the classical dynamics (up to time T = 10000), and the location of the states on the Arnold web. States that mix classically are shown as green dots, and states that mix only quantum mechanically are shown as red dots.
Figure 8. The distribution of X = n_1 + n_2 + n_3 at time Kt = 1000 for three example trimer eigenstates upon coupling to the monomer. The selected trimer eigenstates belong to the X = 25 (blue squares), X = 24 (red triangles), and X = 23 (green circles) manifolds. (Inset) A schematic of the four-site Bose-Hubbard model with the site numbering used in the text. The parameter values used are U = 0.5, K = 0.1, K_c = 0.05, and N = 40. Figure adapted from [19].
Figure 11. (a) An example Arnold web for (X, N) = (36, 40), mapped using the FLI technique. The yellow regions represent chaos. Based on the zeroth-order predictions, the trimer subspace resonance R_Tk centers are shown in red, and the monomer-trimer resonance R_Mk centers are shown in cyan, white, and purple. (b) A close-up of the resonance junction in (J_1, J_0) space formed by the intersection of the R_M1 and R_T2 resonances. The initial Fock state |n_0, n_1, n_2; N⟩ = |16, 16, 0; 40⟩ (the same as the state |n_1, n_2, n_3; N⟩ = |16, 0, 8; 40⟩) is shown as a black dot. The quantum probability flow to the different participating Fock states at times Kt = 0, 10, 20, ..., 1000 is shown as yellow circles with radius ∝ probability. The white arrows connecting a pair of states correspond to classically forbidden but quantum mechanically allowed processes. Figure adapted from [19].
Can effective population size estimates be used to monitor population trends of woodland bats? A case study of Myotis bechsteinii
Abstract Molecular approaches to calculate effective population size estimates (Ne) are increasingly used as an alternative to long-term demographic monitoring of wildlife populations. However, the complex ecology of most long-lived species and the consequent uncertainties in model assumptions mean that effective population size estimates are often imprecise. Although methods exist to incorporate age structure into Ne estimations for long-lived species with overlapping generations, they are rarely used owing to the lack of relevant information for most wild populations. Here, we performed a case study on an elusive woodland bat, Myotis bechsteinii, to compare the use of the parentage assignment Ne estimator (EPA) with the more commonly used linkage disequilibrium (LD) Ne estimator in detecting long-term population trends, and assessed the impacts of deploying different overall sample sizes. We used genotypic data from a previously published study and simulated 48 contrasting demographic scenarios over 150 years using the life history characteristics of this species. The LD method strongly outperformed the EPA method. As expected, smaller sample sizes resulted in a reduced ability to detect population trends. Nevertheless, even the smallest sample size tested (n = 30) could detect important changes (60%-80% decline) with the LD method. These results demonstrate that genetic approaches can be an effective way to monitor long-lived species, such as bats, provided that they are undertaken over multiple decades.
These molecular approaches, along with other indicators of genetic diversity, are increasingly being considered to meet global conservation goals (Hoban et al., 2020; IUCN World Conservation Congress, 2020) and have the potential to supplement or replace traditional long-term monitoring methods.
Ne is an important parameter because it determines the rate of loss of genetic variability and the rate of increase in inbreeding in a population. Because it is determined both by demographic and genetic processes, it gives an indication of the species' ability to respond to environmental change, which cannot be determined from census data alone. For iteroparous species with overlapping generations and/or those with long generation times, the effective number of breeders in a population within a given breeding season (Nb) has been proposed as a refinement of Ne and, by including data from a given reproductive bout rather than a generation, is more readily estimated and may give a better short-term index of a population's current genetic health (Ferchaud et al., 2016; Ruzzante et al., 2016; Waples, 1989, 2005; Whiteley et al., 2017).
Because associations are expected between Ne (or Nb) and the censused population size (Nc), genetic markers can potentially be used to assess changes in populations of vulnerable or exploited species . However, the relationships between Ne and Nc are not straightforward (Luikart et al., 2010;Palstra & Fraser, 2012;Pierson et al., 2018). Ne can be defined as the size of an ideal population experiencing genetic drift at the same rate as the observed population (Fisher, 1930;Wright, 1931), but these ideal populations are based on simplified assumptions (e.g., random mating and stable size) that are usually violated in wild populations due to factors such as variable survival, fecundity, or complex mating systems.
Several approaches, using single or multiple sampling, can be applied to estimate Ne. The ease of using a one-off sampling event means that it is the most widely applied method for assessing populations of wildlife. Techniques to derive Ne from single samples, such as those based on linkage disequilibrium (LD; Hill, 1981; Waples & Do, 2008), molecular coancestry (Nomura, 2008), and excess of heterozygosity (Pudovkin et al., 1996), all provide an assessment of Ne for a given time-point. However, point estimates of Ne are in themselves of limited value for the long-term monitoring of populations, because i) the relationship with true population size is frequently highly uncertain and ii) conservation management usually requires information on temporal changes in conservation status rather than a single snapshot (Pierson et al., 2018; Schwartz et al., 2007). Temporally spaced genetic sampling schemes have the potential to generate important information for conservation, but examples are generally restricted to commercially valuable species (Bruford et al., 2017).
Most techniques used to calculate Ne, such as the LD method (Hill, 1981;Waples & Do, 2008), assume discrete generations.
Provided the interval between sampling events is longer than the interval between generations, the LD method can be used to detect population trends in species exhibiting an iteroparous reproductive strategy. However, for many long-lived species, such as mammals, the approach is rarely applied because it would require multiple decades of data (Kamath et al., 2015; Pierson et al., 2018). Long-term studies where both demographic and genetic materials have regularly been collected for mammals are uncommon, and rarely exceed 10 years. Yet, a study of grizzly bears (Ursus arctos) in the Greater Yellowstone Ecosystem has shown that the added information on age structure via long-term genetic monitoring can help detect population changes over time (Kamath et al., 2015).
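The core of the single-sample LD approach can be sketched from first principles. Under random mating, genetic drift inflates the mean squared correlation of allele states between unlinked loci, and Hill's (1981) estimator, with the sampling correction popularized by Waples (2006), inverts this relationship: N̂e ≈ 1/(3(r̄² − 1/S)) for a sample of S individuals. The sketch below assumes phased biallelic haplotypes for simplicity; NeEstimator v2 itself works from unphased genotypes via the Burrows composite measure and applies further bias corrections.

```python
import itertools

def r_squared(haps, i, j):
    """Squared correlation of allele states at loci i and j across
    a sample of phased biallelic haplotypes (tuples of 0/1)."""
    n = len(haps)
    p_i = sum(h[i] for h in haps) / n
    p_j = sum(h[j] for h in haps) / n
    p_ij = sum(1 for h in haps if h[i] == 1 and h[j] == 1) / n
    d = p_ij - p_i * p_j                       # classical LD coefficient D
    denom = p_i * (1 - p_i) * p_j * (1 - p_j)
    return (d * d) / denom if denom > 0 else 0.0

def ld_ne(haps, n_individuals):
    """Hill (1981) LD estimator with Waples' (2006) sample-size
    correction: Ne ~ 1 / (3 * (mean r^2 - 1/S))."""
    n_loci = len(haps[0])
    r2 = [r_squared(haps, i, j)
          for i, j in itertools.combinations(range(n_loci), 2)]
    mean_r2 = sum(r2) / len(r2)
    adj = mean_r2 - 1.0 / n_individuals        # subtract sampling-noise expectation
    return float("inf") if adj <= 0 else 1.0 / (3.0 * adj)
```

When drift is the only source of LD, a larger mean r² implies a smaller Ne; samples whose mean r² falls at or below the sampling expectation (1/S) yield infinite estimates, which is why any monitoring scheme needs an explicit rule for handling infinite values.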
Bats, being small, nocturnal, volant, and long-lived animals that regularly survive more than 20 years in the wild, are particularly hard to monitor using traditional methods (Gaisler et al., 2003). It is potentially possible to obtain reliable population estimates for some species through census counts at accessible roosting sites. However, this is possible for only a minority of species, because many bat species, and particularly tree-roosting bats, perform fission-fusion behavior (Kerth & Konig, 1999), meaning that census counts are highly unreliable as a significant proportion of the population may be missing. Over half of the world's 1,300+ bat species have unknown population trends (Frick et al., 2019), and so there is considerable interest in the contribution that could be made by genetic monitoring approaches to fill this data gap. For tree-roosting bats, an even higher proportion of species could potentially benefit, as there is no effective methodology available based on traditional monitoring methods; trends must instead be inferred solely from the state of their habitat.
The Bechstein's bat (Myotis bechsteinii) is a woodland specialist, widespread throughout Europe but with a distribution linked to the presence of old growth oak and beech woodlands (Dietz & Pir, 2011;Vergari et al., 1998). Its habitat is thought to have deteriorated sufficiently to permit an inference to be made of a 30% population decline over a 15-year period, with the expectation that the decline will continue (Paunović, 2016). It is consequently classified as Near Threatened by the IUCN and "in need of strict protection" by the European Habitats Directive (92/43/ CEE) (Paunović, 2016).
Until recently, information on the age of individuals was entirely dependent on long-term banding schemes (Munshi-South & Wilkinson, 2010). Novel techniques involving the measure of DNA methylation at specific CpG sites have proven to give reliable age estimates of bats, such as M. bechsteinii . Such methods can therefore provide rapid information on the age structure of a population, which could be used to generate more precise calculations of Ne at regular sampling intervals. Here, we ap-
| Study sites and sample collection
All M. bechsteinii genotypes were obtained from . This dataset includes genotypes from 260 individuals using 14 microsatellite loci collected at 8 sites. For this study, we only used genotypes from sites in Britain as the simulations were performed on a single British colony/site and there is evidence of genetic structure between mainland Europe and Britain.
| Computer simulations
We simulated population changes over a 200-year period (or breeding cycles) using the forward-time, individual-based simulator simuPOP (Peng & Amos, 2008; Peng & Kimmel, 2005). Populations from year (breeding cycle) 1 to 50 remained constant and were discarded from further analyses, as these were treated as a burn-in period in order to adjust the provided genotypes to the initial conditions. Although simulating the local population in isolation may provide more precise estimates, it is biologically unlikely in a species that is known to mix at swarming sites. By accounting for gene flow, the local population would not become isolated from the rest of the base/national population over time. In our simulations, gene flow was adjusted based on expert knowledge and was predominantly governed by males (local population to base population male migration rate = 0.1 and 0.05 the other way), as females live in closed maternity colonies (female migration rate = 0.001). The initial local population size was set at 130 (double the number of female genotypes collected from the maternity colony) with a 1:1 sex ratio to account for both solitary males and the maternity colony, and at 520 individuals (double the size of the initial dataset) for the base population ( Figure 1).
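The deme structure just described can be mimicked with a toy individual-based loop. This is a plain-Python stand-in for the simuPOP configuration, not the authors' script: genotypes, mating, and mortality are omitted, and deme sizes are simply held constant each cycle, as in the stable-population scenarios.

```python
import random

LOCAL_SIZE, BASE_SIZE = 130, 520   # maternity colony + solitary males; base population
MALE_OUT, MALE_IN = 0.10, 0.05     # local->base and base->local male migration rates
FEMALE_RATE = 0.001                # females stay in closed maternity colonies

def new_pop(size, rng):
    """Create a deme with a 1:1 expected sex ratio."""
    return [{"sex": rng.choice("MF")} for _ in range(size)]

def migrate(local, base, rng):
    """Exchange individuals between demes with sex-biased rates."""
    def split(pop, male_rate):
        stay, go = [], []
        for ind in pop:
            rate = male_rate if ind["sex"] == "M" else FEMALE_RATE
            (go if rng.random() < rate else stay).append(ind)
        return stay, go
    local_stay, local_out = split(local, MALE_OUT)
    base_stay, base_out = split(base, MALE_IN)
    return local_stay + base_out, base_stay + local_out

def regulate(pop, size, rng):
    """Hold deme size constant (stable-population scenarios)."""
    if len(pop) > size:
        return rng.sample(pop, size)
    return pop + new_pop(size - len(pop), rng)

rng = random.Random(1)
local, base = new_pop(LOCAL_SIZE, rng), new_pop(BASE_SIZE, rng)
for cycle in range(50):            # burn-in period, as in the simulations
    local, base = migrate(local, base, rng)
    local = regulate(local, LOCAL_SIZE, rng)
    base = regulate(base, BASE_SIZE, rng)
```

The asymmetric male rates let the colony exchange genes with the wider population each cycle, which is what prevents the local deme from drifting in isolation.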
| Effective population size estimates
All effective population size estimates were calculated for the 48 simulations using the 5-year interval outputs from year 50 to 200 (as the first 50 years were discarded because they were considered a burn-in period). Estimates were performed using all genotypes of the population for each interval, then 50 and 30 genotypes to account for variation in sample size. Effective population size estimates were first calculated based on the linkage disequilibrium (LD) method using NeEstimator v2, which assumes discrete nonoverlapping generations (Do et al., 2014). A random mating system and a critical threshold for the lowest allele frequency of 0.02 were used for all calculations. As a refinement, we also assessed the impacts of including only individuals from within the same cohort (i.e., reproductive cycle, defined loosely as animals aged 0-3 years, this being the limit of precision for aging the species). This removes the variability otherwise introduced by sampling overlapping generations, and gives an estimate of Nb. Then, we used the software Age Structure to provide estimates using a parentage assignment estimator (EPA; Wang et al., 2010). Unlike the linkage disequilibrium approach, the EPA method incorporates information on age and sex along with the individual genotypes of a population. All estimates assumed the same life history traits as those included in the simulations, along with a 0.5 probability of including a parent in the dataset.

FIGURE 1: Diagram summarizing the initial settings of the simulations performed on M. bechsteinii genotypes. Arrows indicate migration rates.
To clarify, the EPA method uses extra information (age and sex) to provide direct estimates of Ne. In contrast, the LD method, if applied to random samples including a number of consecutive cohorts roughly equal to the generation length, should provide estimates of per-generation Ne, but these results depend on the sampled age structure (e.g., whether samples are in proportion to the age structure) (Robinson & Moyer, 2013;Waples & Do, 2010). Within single cohorts, the LD method estimates Nb and is expected to apply to a shorter period (3 years), whereas the EPA method estimates the harmonic mean Ne across the generation span prior to the samples being collected (Wang et al., 2010). In our simulations, individuals were sampled randomly and the population age structure did not vary considerably over the simulations, so we expect that the LD estimates will represent a consistent estimate throughout all simulations.
| Statistical analyses
The correlation between population size and effective population size estimates, along with confidence intervals, was assessed using normalized cross-correlations (NCC). Any infinite values were changed to a tenfold increase of the initial population size prior to analysis, to permit them to be included in the final analysis. NCC scores ranged from −1 to 1 (perfect correlation). To assess whether Ne changes were detected immediately or were slightly delayed, we also calculated NCC with time shifts at T1 and T2 (5 and 10 years after sampling, respectively). Correlation scores and confidence interval widths were then compared between both methods with different sample sizes using a two-way ANOVA. We also calculated Ne/Nc ratios, and standardized population sizes and Ne estimates to a mean of 0 with a standard deviation of 1. Then, we used Bland-Altman plots to assess and compare the precision of Ne estimates for each method with the known population size (Nc).
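The comparison steps described above can be sketched as follows: a minimal normalized cross-correlation on standardized series, with infinite Ne estimates capped at ten times the initial population size, and an optional shift implementing the T1/T2 comparisons. This is an illustrative reimplementation, not the authors' analysis code.

```python
import math

def standardize(xs):
    """Scale a series to mean 0, standard deviation 1."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd for x in xs]

def ncc(nc, ne, shift=0, initial_n=None):
    """Normalized cross-correlation between census sizes (nc) and Ne
    estimates (ne). Infinite estimates are capped at 10x the initial
    population size; shift > 0 compares Ne at t + shift with Nc at t."""
    cap = 10 * (initial_n if initial_n is not None else nc[0])
    ne = [cap if math.isinf(x) else x for x in ne]
    a = standardize(nc[:len(nc) - shift])
    b = standardize(ne[shift:])
    return sum(x * y for x, y in zip(a, b)) / len(a)
```

A score of 1 means the Ne series tracks the census series perfectly; shifting by one or two 5-year intervals tests whether population changes appear in the estimates with a delay.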
| Detecting population declines
For the method showing the greatest ability to identify population trends, we tested its performance by measuring (a) the ability to detect population declines and (b) the frequency with which a decline was detected in a stable population (false detection). For this, we focused on sudden population declines because the detection of catastrophic events often requires immediate conservation action.
We ran 20 simulations of sudden population size declines of 20% occurring at breeding cycle 100. This was repeated for 40%, 60%, and 80% declines. Ne estimates were calculated at 25 years before
| RESULTS
We found that NCC scores between true and effective population sizes did not vary according to the timing of sampling (Time Shift F(2,999) = 0.535; p = .59, 3-way ANOVA; Supplementary S3). We therefore did not perform further analyses on time periods T1 and T2, as our results suggest that population changes were detected immediately at T0. However, method, sample size, and the interaction between these factors had a significant effect on NCC scores.

We tested the ability to detect population declines using the LD method, as it outperformed the EPA method. Our results showed that 60% and 80% population declines are detected at least 75% of the time during the 25 years following the bottleneck, with a maximum detection rate of 98.8% when an 80% decline occurs (Table 1 and Supplementary S8 and S9). On the other hand, 20% declines were almost never detected, and 40% declines were detected approximately 50% of the time. The percentage of false declines detected in a stable population 25 years prior to the bottleneck was similar when analyses used either all samples (17.6%) or 50 samples (17.4%), but increased when using 30 samples (29.5%).

FIGURE 2: Box plots of normalized cross-correlation scores (a score of 1 reflects a perfect correlation) between population size and Ne estimate time series based on the LD and EPA methods, using different sample sizes for all simulations at a colony.

FIGURE 3: Results from three different simulations (Simulation 21: a, b; Simulation 19: c, d; Simulation 35: e, f) for the LD (left column) and EPA method (right column). The known population size is plotted in black, while estimates using different numbers of samples are in color.

FIGURE 4: (a) Relationship between Ne/N ratio and population size using the LD method at a local scale with varying sample sizes. (b) Bland-Altman plot with 95% confidence limits of the mean difference, using standardized values for N and Ne estimates from the LD method. In both figures, red squares represent a sample size of 30 (CI: dotted), blue triangles 50 samples (CI: two-dashed), and green circles all samples (CI: dashed). A plot evenly scattered above and below zero suggests that there is no consistent bias in the methods used.
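The detection rates reported here imply a per-estimate decision rule. A simple illustrative criterion (an assumption for this sketch, not necessarily the paper's exact rule) flags a decline when the post-bottleneck confidence interval falls entirely below the pre-bottleneck one:

```python
def detected_decline(pre_ci, post_ci):
    """Flag a decline when the post-bottleneck CI (lo, hi) lies
    entirely below the pre-bottleneck CI."""
    return post_ci[1] < pre_ci[0]

def detection_rate(pre_ci, post_cis):
    """Share of post-bottleneck estimates flagging a decline."""
    hits = sum(detected_decline(pre_ci, ci) for ci in post_cis)
    return hits / len(post_cis)
```

Applied to a stable population, the same rule yields the false-detection rate; wide CIs from small samples inflate both misses and false alarms.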
| DISCUSSION
Genetic monitoring has the potential to provide valuable information on the population trends of elusive species. While such methods are commonly applied in fisheries (e.g., Christie et al., 2012), they are rarely used with long-lived wild mammals and birds. Recent advances in molecular aging techniques have provided the opportunity to test whether models incorporating additional information on population age and sex structure would improve the precision of Ne estimates.
By using genotypic data and forward-time simulations, we were able to compare the ability of alternative sampling strategies to detect population trends of a rare bat species, M. bechsteinii.
Despite including additional information on population age and sex structure, the EPA method (Wang et al., 2010) was strongly outperformed by the more commonly used LD method when using the same sample size. While the single-sample LD method has been used and tested on numerous species (e.g., Murphy et al., 2018; Waples et al., 2018), the EPA method developed by Wang et al. (2010) has not received as much attention. Work on grizzly bears by Kamath et al. (2015) in Yellowstone indicated that despite providing similar results to other methods, the EPA was more sensitive to a decrease in sample size. Sampling schemes that favor the collection of closely related individuals may cause underestimates, and errors in age estimates may lead to increased variability (Wang et al., 2010). For the study of bats, such limitations may be important drawbacks, as most samples per site are likely to originate from a maternity colony where all individuals are somewhat related, and age estimates are likely to be approximate. Furthermore, the requirement for additional information on populations (e.g., sampling proportions of age classes and sex) that is rarely available from wild populations may further reduce the precision of the Ne estimates. The LD method, on the other hand, solely requires genotypes and is therefore less subject to such errors, despite lacking any extra information on the sex and age of individuals.
The importance of sample size for the LD method in detecting population declines was previously highlighted by Antao et al. (2011) through simulations, and in empirical work (Kamath et al., 2015). Our results show that increasing sample size clearly improved the capacity to detect population trends. Yet, large declines (60% -80%) could still be detected using the LD method despite reducing sample size.
For small declines, Waples and Do (2010) suggested that increasing the number of loci would have a similar effect to increasing sample size. Single nucleotide polymorphisms (SNPs) provide a promising alternative to traditionally used microsatellites, and many hundreds of SNPs can be readily identified across the genome. Antao et al. (2011), on the other hand, found that increased sample sizes are far better suited to detect rapid population declines.
In the case of age-structured populations, the grouping of several consecutive cohorts using the single-sample LD method can provide robust estimates of Ne, representative of the number of breeders (Nb) . We therefore assessed the impacts of repeatedly sampling only bats aged 0-3 years old.
However, our results indicate that estimates using this approach performed similarly to the LD method using 30 samples from individuals of all ages. Reduced precision using this method may be directly linked to variations in sample size as when bottlenecks occur, the number of juveniles is often reduced to very few individuals in a colony. Robinson and Moyer (2013) have previously reported that sampling only juveniles gives accurate estimates of Nb, but the best estimates of Ne are derived from sampling across the reproductively active population, particularly where reproductive success is skewed toward older age groups (as is the case with bats).
The Ne/Nc ratio is important for understanding the risk that demographic, environmental, and genetic factors have on the viability of populations, because Ne is usually smaller than the true population size (Palstra & Fraser, 2012). Yet, this relationship can be hard to assess as it can be affected by either habitat factors or population changes over time (Belmar-Lucero et al., 2012;Fraser et al., 2007).
Although the high ratios observed are likely to be a result of the sensitivity of the Ne estimation methods to small sample sizes, our work suggests that Ne/Nc ratios ranging from 0.24 to 0.78 are plausible for slow-breeding mammals (Hoban et al., 2020), but there is little evidence to permit comparison of these results with other bat species. When using the LD method, we found that Ne/Nc ratios showed a log-linear relationship with N, which agrees with Palstra and Fraser (2012). In wild mammal populations, the short duration of most studies means that such trends remain unclear. For example, Kamath et al. (2015) found that Ne/Nc ratios remained constant, while Pierson et al. (2018) highlighted the difficulties in finding any consistent trends over time. It is therefore generally recommended that Ne is used primarily as a metric to detect changes over time as opposed to assessing population size (Pierson et al., 2018). For IUCN red list assessments, this means that genetic data at present could only contribute toward criterion A (population size reduction based on Ne) and will not provide information on criterion C (small population size and decline) or D (very small or restricted population), as these depend on estimates of the known number of mature adults (IUCN, 2012).

TABLE 1: Percentage Ne estimates detecting a significant population decline after a bottleneck at breeding cycle 100.
| Conservation implications/applications
Appropriate planning of wildlife monitoring schemes is vital if they are to be robust and cost-efficient. Here, we summarize essential points that must be considered for the setup of a genetic monitoring program for M. bechsteinii, or any other woodland bat.
• The LD method appears more robust than the EPA method.
Although age structure may not be essential in the calculation of Ne, molecular aging techniques still have an important place in population monitoring as they may help in the detection of small declines (e.g., high proportion of old individuals) and enable estimates of Nb to be made for each cohort.
• Long-term monitoring with large sampling intervals (~5 years) should be prioritized. The detection of any trend in Ne for a long-lived species with overlapping generations requires a long time series where the sampling interval should be similar to the generation length of the species (R. S. Waples, personal communication, 2018). For this study, we used a sampling interval equivalent to the generation length of M. bechsteinii as defined by the IUCN.
• Sample size is a primary factor in determining the power of a monitoring program to detect population trends.

The EPA method, which takes into account age and sex, performed poorly, because it also requires additional information on the population that is not always available (e.g., probability of sampling a parent). The LD method, however, performed well and, in this study, is better suited for detecting population trends over time, if sample size is large enough. This study, using simulations over long periods of time, is the first to test the possibility of monitoring woodland bat population trends using molecular approaches and offers insights into the most appropriate sampling strategy.
ACKNOWLEDGMENTS
The research was funded by the Woodland Trust and the Vincent Wildlife Trust. FM is supported by a NERC KE Fellowship NE/S006486/1 and the University of Sussex. We would like to thank Bo Peng for helping with the setup of the simulations. We also thank Bethany Smith and Domhnall Finch for their comments on the manuscript.
CONFLICT OF INTEREST
The authors report no conflict of interest.
|
v3-fos-license
|
2017-04-13T14:57:29.560Z
|
2012-06-01T00:00:00.000
|
9940976
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CC0",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0038258&type=printable",
"pdf_hash": "6e90bd91bc256685044b8e9714f1e7d911198f09",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:825",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "6e90bd91bc256685044b8e9714f1e7d911198f09",
"year": 2012
}
|
pes2o/s2orc
|
Subcapsular Sinus Macrophage Fragmentation and CD169+ Bleb Acquisition by Closely Associated IL-17-Committed Innate-Like Lymphocytes
Subcapsular sinus macrophages (SSMs) in lymph nodes are rapidly exposed to antigens arriving in afferent lymph and have a role in their capture and display to B cells. In tissue sections SSMs exhibit long cellular processes and express high amounts of CD169. Here, we show that many of the cells present in lymph node cell suspensions that stain for CD169 are not macrophages but lymphocytes that have acquired SSM-derived membrane blebs. The CD169 bleb+ lymphocytes are enriched for IL-17 committed IL-7RαhiCCR6+ T cells and NK cells. In addition, the CD169 staining detected on small numbers of CD11chi dendritic cells is frequently associated with membrane blebs. Counterintuitively, the CD169 bleb+ lymphocytes are mostly CD4 and CD8 negative, whereas many SSMs express CD4. In situ, many IL-7Rαhi cells are present at the subcapsular sinus and interfollicular regions and migrate in close association with CD169+ macrophages. These findings suggest SSMs undergo fragmentation during tissue preparation and release blebs that are acquired by closely associated cells. They also suggest an intimate crosstalk between SSMs and IL-17 committed innate-like lymphocytes that may help provide early protection of the lymph node against lymph-borne invaders.
Introduction
Subcapsular sinus macrophages (SSMs) are a unique subset of lymph node macrophages that form a dense layer overlapping with the lymphatic lining that separates the lymphatic sinus and B cell follicle. In situ staining has shown that SSMs express high amounts of the sialic acid binding Ig-like lectin 1 (Siglec1 or CD169) and the integrin CD11b (Mac1), and, in contrast to their counterparts in the medulla, lack expression of the macrophage marker F4/80 [1,2,3,4]. Many SSMs straddle the lymphatic lining cells at the base of the subcapsular sinus, extending a ''head'' into the sinus and long cellular processes (or ''tails'') into the adjacent B cell follicle [1]. In contrast to the dynamic behavior of dendritic cell processes [5,6,7], real time imaging studies have revealed that the long cellular processes of SSMs are relatively static, potentially indicating tight adhesion to adjacent stromal cells or extracellular matrix [1,2,3,4].
Due to this unique localization, SSMs are poised to rapidly encounter pathogens and antigens that reach the lymph node via the lymph. Indeed, a number of studies have revealed that SSMs have the capacity to capture a range of antigens, including viral particles, immune complexes, antigen-loaded beads, and other opsonized antigens [8]. In contrast to classical macrophages, which typically internalize and degrade antigen, SSMs are thought to be poorly phagocytic [9,10], a property that may contribute to their capacity to function as antigen-presenting cells for B cells. Antigen captured by SSMs is displayed on macrophage ''tails'' that extend into the B cell follicle, where B cells can directly acquire antigen via complement or B cell receptors [1,2,3,4]. SSMs have also been shown to activate iNKT cells. Subcutaneously injected a-GalCer-coated microspheres were captured by SSMs, processed, and presented via CD1d to iNKT cells [11]. In addition to these antigen presentation functions, a number of recent studies have shown that SSMs are an early site of replication for a number of viruses [12][13] as well as the parasite Toxoplasma gondii [14]. SSMs have been suggested to be unusually permissive for pathogen infection, a property that might foster local production of cytokines that protect other cell types from infection [15].
As part of a continued effort to understand the biology of SSMs, attempts to isolate and study these CD169 hi cells have been made by us [2] and others [3,15,16,17,18]. Here we report that many of the cells in lymph node cell suspensions that stain for CD169 are lymphocytes that have acquired SSM-derived membrane blebs and thus masquerade as CD169 hi SSMs during flow cytometric analysis. This acquisition does not appear to be a random process as CD169 bleb + lymphocytes are enriched for IL-17 committed IL-7Ra hi CCR6 + T cells and NK cells. Moreover, in situ and realtime imaging analysis reveals that IL-7Ra hi CXCR6 hi cells are present at the subcapsular sinus and in interfollicular regions, often migrating over the membrane processes of CD169 + macrophages. These observations raise the possibility of crosstalk between SSMs and innate-like lymphocyte populations that may have important roles in the early phases of lymph node immune responses.
Results
Detection of CD169 + staining on IL-7Ra hi CCR6 + lymphocytes

In situ staining has revealed that SSMs are CD169 hi CD11b + CD11c lo cells that, in contrast to the medullary macrophages, are F4/80 − [1,2,3,4]. Previously, we identified a population of CD169 hi CD11b + CD11c lo F4/80 − cells by flow cytometry that appeared to correspond to SSMs [2]. In further efforts to characterize CD169 hi CD11c lo cells by flow cytometry, we observed that many of the cells with a CD169 hi CD11c lo phenotype also expressed IL-7Ra ( Figure 1A). However, subsequent analysis of IL-7Ra expression in tissue sections revealed that SSMs were IL-7Ra −. Instead, we observed a population of IL-7Ra hi cells, with a rounded, lymphocyte-like morphology, which were present at the subcapsular sinus and interfollicular regions of the lymph node ([19] and Figure 1B and Supplemental Figure S1). These cells expressed higher amounts of IL-7Ra than the bulk population of T zone T cells and were often closely associated with the macrophages ( Figure 1B). In some cases, CD169 + SSM "arms" appeared to be wrapped tightly around the IL-7Ra hi cells ( Figure 1B, far right bottom panel), although the resolution of the imaging analysis could not exclude the possibility that some IL-7Ra hi cells were expressing CD169.
These results suggested that IL-7Ra hi lymphocytes that stained positively for CD169 in flow cytometric analysis might have been contaminating the SSM gate. Further analysis of the IL-7Ra hi CD169 hi CD11c lo cells by flow cytometry revealed that the majority were CCR6 + ( Figure 1C). When we gated on IL-7Ra hi CCR6 + cells from total lymph node cells, we observed that this gate included both CD3e + T lymphocytes and CD3e − non-T cells. Moreover, there was a "smear" of CD169 staining spanning almost a 2-log range on a fraction of both the IL-7Ra hi CCR6 + CD3e + T lymphocytes and CD3e − non-T cells ( Figure 1D, top panels). CD3e + IL-7Ra hi CCR6 + cells were absent in a TCRbd-deficient mouse, confirming that a subset of the IL-7Ra hi CCR6 + cells were T lymphocytes ( Figure 1D).
To exclude the possibility that the CD169 antibody Ser4 was cross-reacting with a non-CD169 epitope on IL-7Ra hi CCR6 + cells, we stained lymph node cell suspensions with 3D6, an antibody that recognizes a distinct epitope on CD169 [20]. IL-7Ra hi CCR6 + lymphocytes that bound Ser4 also stained with 3D6 ( Figure 2A), suggesting that CD169 was present on the lymphocytes. As a further test that these cells were specifically staining for CD169, we utilized CD169-DTR mice, in which the diphtheria toxin receptor (DTR) is knocked into the Siglec1 locus. Intraperitoneal administration of diphtheria toxin (DT) causes ablation of CD169-expressing cells, including the SSMs in these mice [16,21]. Following DT treatment, there was a loss of CD169 + cells by flow cytometry, including CCR6 + CD169 + cells ( Figure 2B), indicating that CD169 staining on IL-7Ra hi CCR6 + lymphocytes is specific.
However, when we measured CD169 transcripts on sorted CD169 + and CD169 − IL-7Ra hi CCR6 + lymphocytes, we detected low levels of Siglec1 mRNA in both the CD169 + and CD169 − fractions ( Figure 2C). In contrast, Siglec1 mRNA was abundant in sorted CD169 + CD11c lo F4/80 + cells (i.e., cells that stain positive for medullary sinus macrophage markers) ( Figure 2C). These data do not exclude the possibility that IL-7Ra hi CCR6 + cells do intrinsically express low levels of Siglec1 mRNA. However, given the low abundance of mRNA detected in both CD169 + and CD169 − IL-7Ra hi CCR6 + lymphocytes, despite more than a 10-fold difference in CD169 staining by flow cytometry, we wondered whether these cells were acquiring CD169 in trans from other cells.
These combined data led us to hypothesize that IL-7Ra hi CCR6 + lymphocytes acquire CD169 + macrophage-derived membrane fragments or ''blebs''. Consistent with this possibility, we observed that CD169 staining intensity on IL-7Ra hi lymphocytes correlated with other markers expressed by SSMs based on in situ staining, such as CD11b ( Figure 2F). To explore the possibility that these cells acquire CD169 + blebs, CD169 + IL-7Ra hi CCR6 + lymphocytes were sorted and fixed to slides to visualize CD169 staining. CD169-staining membrane blebs were observed attached to the surface of IL-7Ra hi CCR6 + lymphocytes ( Figure 2G). Consistent with the range in CD169 staining intensity observed by FACS, we noted significant variation in the number and size of blebs attached to each cell. In some cases the amount of CD169 surface staining was extensive. However, even in many of these cases, the marker did not appear uniformly distributed across the plasma membrane, suggesting that the staining was associated with a macrophage-derived membrane process that encompassed a large part of the lymphocyte surface ( Figure 2G, far right panel).
To assess CD169 + bleb acquisition by IL-7Ra hi CCR6 + lymphocytes more quantitatively, experiments were carried out using ImageStreamX imaging flow cytometry (Amnis Corp). Cells with the CD169 + IL-7Ra hi CCR6 + marker profile were gated and CD169 surface distribution was analyzed. Similar to our observations with sorted cells, we observed CD169 + "blebs" of various sizes on the surface of the cells (Figure 3A, B). To quantify the fraction of cells with small compared to large CD169 + "blebs", we calculated the area of CD169 staining on the gated cells and separated the images into small, medium, and large area gates (Figure 3A, B). Most CD169 + IL-7Ra hi CCR6 + cells had a small area of CD169 staining, while approximately 15% had a large area of staining (Figure 3B). CD169 + IL-7Ra hi CCR6 + cells in the small area gate (R11) tended to have small, punctate CD169 staining, while those in the large area gate (R13) tended to have CD169 staining covering the majority of the cell (Figure 3B). Whether these rare cells with a large area of CD169 staining have acquired very large CD169 + blebs or actually express CD169 will require future study.
High CD169 staining was recently reported on a subset of CD11c hi tumor-antigen-presenting lymph node cells [16]. Given the above findings, we wondered whether CD169 hi CD11c hi cells all express CD169, or whether some might be CD11c hi dendritic cells that have acquired CD169 + blebs. Consistent with the latter possibility, analysis of mixed bone marrow chimeras revealed no difference in CD169 staining on Siglec1 −/− and Siglec1 +/+ CD11c hi cells (Supplemental Figure S2B). Thus, like IL-7Ra hi CCR6 + lymphocytes, CD11c hi cells can acquire CD169 in trans. We then asked whether we could visualize CD169 + blebs on CD11c hi cells using imaging flow cytometry. Indeed, analysis of gated CD169 + CD11c + cells showed bleb-like CD169 staining on many of these cells (Figure 3D). Quantification of the CD169 staining area revealed that the majority of CD169 + CD11c hi cells had a small area of CD169 staining, corresponding to CD11c hi cells with small blebs of CD169 staining (Figure 3C, D). Less than 15% of CD169 + CD11c hi cells had a large area of CD169 staining, which sometimes appeared uniform and covered much of the cell (Figure 3C, D). These cells may correspond to rare CD11c hi cells that express CD169 or that have acquired large CD169 + blebs. Together, these data suggest that the majority of CD169 + CD11c hi cells are dendritic cells that have acquired CD169 + blebs.
CD169 bleb + cells are enriched for IL-7Ra hi CCR6 + cells and NK cells
These results suggested that IL-7Ra hi CCR6 + cells do not express CD169, but rather that they acquire CD169 + macrophage-derived blebs. Bleb acquisition does not appear to be a random process, as CD169 + staining was enriched on IL-7Ra hi CCR6 + lymphocytes compared to total T and B cells (Figure 4A). Furthermore, we observed enrichment for CD169 staining on NK1.1 + DX5 + CD3e − NK cells, suggesting that NK cells also acquire blebs from CD169 hi macrophages (Figure 4A). This may suggest that these cell types have a unique capacity to capture blebs that are shed by the SSMs either in vivo or during tissue preparation. Alternatively, given that IL-7Ra hi cells are located near CD169 + SSMs in situ (Figure 1B and Supplemental Figure S1), bleb acquisition may be dependent on localization adjacent to SSMs. Notably, tissue digestion was not required for CD169 + bleb acquisition, as the fraction of IL-7Ra hi CCR6 + lymphocytes that were CD169 + by flow cytometry was similar in digested compared to undigested cell suspensions (Figure 4B). The total number of IL-7Ra hi CCR6 + lymphocytes was greater in digested samples, suggesting that while digestion improved the recovery of these lymphocytes, it was not required for bleb acquisition per se.
IL-7Ra hi CCR6 + lymphocytes are IL-17 committed cells that interact with CD169 + macrophages

IL-7Ra hi CCR6 + cells express high levels of CXCR6 ([19] and Figure 5A), a chemokine receptor enriched on effector and memory T cells [22]. Consistent with an effector phenotype, flow cytometric analysis revealed that IL-7Ra hi CCR6 + cells were CD44 hi and CD62L lo (Figure 5A). Based on their expression of CCR6, a marker of IL-17 committed cells [23], we hypothesized that these cells were programmed to make IL-17A. Indeed, the IL-7Ra hi CCR6 + lymphocytes rapidly produced IL-17A following phorbol ester plus ionomycin stimulation ex vivo, although there was no difference in the capacity of CD169 bleb + cells to make IL-17A compared to the bleb − fraction (Figure 5B). The ability of the IL-7Ra hi CCR6 + lymphocytes to rapidly produce cytokines ex vivo in addition to their effector phenotype suggested that these cells correspond to the IL-17A committed cells recently described by several groups [24].
The IL-7Ra hi CCR6 + gate included cdT, abT, and non-T cells ( Figure 5C) all of which showed a smear of CD169 staining. The IL-7Ra hi CCR6 + cdT cells correspond to innate IL-17 producing cdT that have been described in the dermis and peripheral lymph nodes [19,25,26]. We observed that approximately 15-20% of IL-7Ra hi CCR6 + cells were CD1d-tetramer + CD3e int iNKT cells, which showed a smear of CD169 staining ( Figure 5D). These CCR6 + iNKT cells likely correspond to the IL-17 producing iNKT cells recently described in skin and peripheral lymph nodes [27].
Analysis of CD4 expression by the IL-7Ra hi CCR6 + T cells showed a smear of CD4 signal, much of which correlated with CD169 staining (Figure 5E, top panel, upper quadrants), as well as a population of T cells with a conventional CD4 hi CD169 − phenotype (Figure 5E, top panel, lower right quadrant). In contrast, the IL-7Ra hi CCR6 + T cells were CD8 − (Figure 5E, bottom panel). The smear of CD4 staining suggested, counterintuitively, that the SSMs express CD4 and that the CD4 − IL-7Ra hi CCR6 + T cells were acquiring CD4 + CD169 + blebs from the macrophages. Indeed, analysis of CD4 expression in situ revealed CD4 staining on the majority of CD169 + SSMs in control but not CD4-deficient mice (Figure 5F).
Finally, we took advantage of the high expression of CXCR6 by IL-7Ra hi CCR6 + cells to gain a more precise assessment of their localization in lymph node tissue sections than can be achieved by IL-7Ra-staining alone. Approximately 60-80% of IL-7Ra hi CXCR6 hi lymphocytes identified by flow cytometry are CCR6 + [19] (and data not shown), suggesting that the majority of IL-7Ra hi CXCR6 hi cells on sections correspond to IL-7Ra hi CCR6 + IL-17-committed lymphocytes. Using Cxcr6 GFP/+ reporter mice [22], we found that the IL-7Ra hi CXCR6 hi cells were abundant in subcapsular and interfollicular regions, often adjacent to CD169 + macrophages (Supplementary Figure S3). Moreover, two photon laser scanning microscopy of intact lymph nodes revealed CXCR6 hi (GFP hi ) cells migrating in close association with CD169+ macrophages in subcapsular sinus and interfollicular regions (Supplementary Movies S1 and S2).
Discussion
We report here that many of the cells in lymph node cell suspensions that stain positively for SSM markers are not macrophages, but rather IL-7Ra hi CCR6 + lymphocytes and NK cells that have acquired CD169 + SSM-derived membrane blebs. This conclusion is established by: (i) the incomplete concordance between surface marker expression on CD169 + cells detected in tissue sections and by flow cytometry; (ii) the low abundance of CD169 transcripts in CD169 + IL-7Ra hi CCR6 + cells by quantitative PCR as well as in microarray analysis of FACS-sorted CD169 + CD11b + CD11c lo F4/80 − cells [2,28]; (iii) the positive CD169 staining on Siglec1 −/− IL-7Ra hi CCR6 + cells in Siglec1 +/+ hosts; and (iv) the visualization of CD169 + blebs on the surface of sorted cells and cells examined using ImageStreamX flow cytometry. The tendency of IL-7Ra hi CCR6 + lymph node cells to acquire CD169 + macrophage-derived blebs suggests there may be a propensity for these two cell types to interact in vivo. Future studies should explore whether a specific interaction between IL-7Ra hi CCR6 + lymphocytes and CD169 + macrophages occurs in vivo, and, if so, define the functional consequences.
An important question that arises from these observations is whether bleb acquisition by IL-7Ra hi CCR6 + lymphocytes occurs in vivo or during cell preparation. Arguing against acquisition being a prominent process in vivo, during real-time imaging studies of intact lymph nodes in mice intravitally labeled with CD169 antibodies, we have so far not observed CD169 marker acquisition by migrating lymphocytes (Supplementary Movies S1 and S2, and data not shown). Therefore, we favor the view that bleb acquisition predominantly occurs during tissue preparation. The finding that CD169 acquisition by IL-7Ra hi CCR6 + cells was similar whether the lymph nodes were gently teased apart and enzyme digested or simply mechanically separated suggests that SSMs may be highly prone to fragmentation, perhaps due to strong attachments to the surrounding cells or extracellular matrix. Alternatively, the cells may be programmed to undergo blebbing during apoptosis. In this regard it is notable that dying germinal center B cells in intact lymph nodes can release blebs that are acquired by neighboring lymphocytes [29], and active ROCK-dependent membrane blebbing during apoptosis has been reported in a number of in vitro studies [30]. In addition, one physiological mechanism of axon pruning involves active membrane blebbing [31]. Whatever the mechanism of SSM blebbing, the present findings highlight the challenges associated with isolating pure populations of SSMs, challenges that likely extend to other cell populations with long membrane processes, such as the closely related CD169 + marginal metallophilic macrophages (MMMs) in the spleen and the non-hematopoietic follicular dendritic cells in all secondary lymphoid tissues. MMM isolation and analysis by flow cytometry has been reported in a number of studies, but few of these studies have examined the isolated cells by microscopy.
In a report where the low density (macrophage-enriched) fraction of cells from digested spleen and lymph nodes was examined by microscopy, less than 0.1% of the cells stained for markers corresponding to MMM and SSM; these cells were noted to have a granular cytoplasm and in some cases appeared tightly associated with lymphocyte-sized cells [32].
Given that CD169 bleb + cells are enriched for IL-7Ra hi CCR6 + cells and NK cells, in future efforts to isolate SSMs it will be important to include IL-7Ra, CCR6, and NK1.1 in addition to CD3e and B220 in a lineage 'dump' gate, while keeping in mind that SSMs are CD4 + . In our initial efforts to perform such an analysis we have found that there are very few lineage 'dump' negative cells (unpubl. obs.). However, it must also be considered likely that SSMs themselves will be strongly associated with IL-7Ra hi CCR6 + cells, causing them to be lost during doublet or 'dump' gating. In future studies, it will be important to perform microscopy on CD169 hi cells of large size, including cells that might be classified as doublets on flow cytometry, to more definitively test for the amount of SSM recovery that can be achieved with current tissue digestion procedures. Macrophage reporter mice, such as the MacGreen mice [33] or Lysozyme M-cre [34] x Rosa-stop flox YFP mice, may be of utility in identifying intact SSMs, given that the cytoplasmic reporter molecules may be restricted to intact macrophages and absent from macrophage-derived blebs. Until improved isolation and cell tracing procedures are developed it will be important to confirm any unique properties of the cells suggested based on gene expression analysis of sorted cells [2,15,17] through assessments of gene or protein expression by the cells in situ or after isolation from snap-frozen tissue by laser capture microscopy [35].
Our finding that mouse SSMs express CD4 is consistent with an earlier study in rats showing CD4 expression by these cells [36]. Most other mouse macrophage populations, including peritoneal macrophages, Kupffer cells, and red pulp macrophages, do not express CD4, in contrast to rats as well as humans, in which CD4 expression by monocytes and macrophages is more widespread [37]. Thus, CD4 expression may be a special feature of these lymph node macrophages. CD4 expression by human lymph node sinus macrophages, which may correspond to mouse subcapsular sinus macrophages, has been reported [38]. SSMs are thought to be uniquely permissive for viral replication [12,15,35], raising the possibility that expression of the HIV coreceptor CD4 by SSMs may play an important role during HIV infection and perhaps also in the capture and display of HIV-1 virions for recognition by B cells.
We describe a population of IL-7Ra hi CCR6 + CXCR6 hi lymphocytes that are abundant at the subcapsular sinus and in interfollicular regions. This is a diverse population of cells, including cdT, abT, and non-T cells, all of which rapidly produce IL-17A when stimulated with PMA and ionomycin ex vivo. The IL-7Ra hi CCR6 + TCRcd + cells correspond to innate IL-17 producing cdT that have been described in the dermis and peripheral lymph nodes [19,25,26]. Approximately 15-20% of IL-7Ra hi CCR6 + cells were iNKT cells, which likely correspond to the IL-17 producing iNKT cells recently described in skin and peripheral lymph nodes [27]. In another study, approximately 10% of adoptively transferred iNKT cells localized to the interfollicular or subcapsular sinus regions of the lymph node [17], consistent with the notion that at least a subset of iNKT cells localize near SSMs in the steady state. What types of effector cells the remaining TCRb + CD4 and CD8 double-negative cells and CD3e − cells in the CD169 + IL-7Ra hi CCR6 + gate correspond to is unclear, although they may include IL-17 producing LTi-like cells [24,39].
The presence of IL-17 committed lymphocytes in or near the subcapsular sinus and their migration in close association with CD169 + macrophages raises the possibility that these cell types functionally interact. The spontaneous clustering or 'swarming' of CXCR6 hi (GFP hi ) cells in close proximity with SSMs observed in some of our imaging experiments (Supplementary Movie S2) also provides support for crosstalk between these cells though the type(s) of stimuli that provoke this behavior are not yet defined. IL-17 plays an important role in immunity at barrier surfaces, such as the skin [24]. One might consider the subcapsular sinus lining cells as a second barrier, given the dense network of macrophages and size-exclusion properties of this site as well as the constant exposure to lymph fluid that delivers antigens to the sinus within seconds to minutes of inoculation [1,3,40,41]. CD169 + macrophages, including cells at the subcapsular sinus and in interfollicular regions, constantly sample the lymph draining the skin for antigen and likely also for inflammatory cytokines. Thus, one intriguing possibility is that SSMs sense lymph-derived signals and activate adjacent lymphocytes by producing IL-17-promoting cytokines, such as IL-1b and IL-23 [42,43] or upregulating presentation of their cognate ligands, such as the recently described endogenous iNKT ligand, b-GlcCer [44]. Early production of IL-17 and possibly additional cytokines may enhance pro-inflammatory cytokine production and microbicidal activity of the macrophages [45,46] and help maintain the barrier function of the subcapsular sinus while influencing the initial induction of effector T [42] and B cells [47] within the lymph node parenchyma. Future studies should explore whether crosstalk between IL-17 committed lymphocytes and SSMs plays a role in protecting the lymph node from invading pathogens and in guiding early phases of lymph node immune responses.
Ethics Statement
All experiments conformed to ethical principles and guidelines approved by the UCSF Institutional Animal Care and Use Committee under protocol authorization number AN087331-01.
Tissue preparation
Unless otherwise indicated, lymph nodes were digested as described [2]. Briefly, lymph nodes were teased apart in DMEM containing penicillin/streptomycin and HEPES buffer and digested with 67 µg/ml Liberase TM (Roche) and 20 µg/ml DNase I (Sigma) for 20 minutes while rotating. The digestion was then quenched by the addition of 10% fetal bovine serum (FBS) and 5 mM EDTA, and lymph nodes were disaggregated by mashing through a 100 µm nylon sieve (BD Bioscience).
To detect IL-17A, cells were stimulated for 2 h with 50 ng/ml PMA (Sigma) and 1 µg/ml ionomycin (EMD Biosciences) in the presence of Brefeldin A (BD Biosciences), stained for surface antigens, treated with BD Cytofix Buffer and Perm/Wash reagent (BD Biosciences), and stained with anti-IL-17A.
RNA was extracted and Siglec1 mRNA was quantified as previously described [2]. For imaging, a fraction of the sorted cells were washed with PBS containing 0.1% BSA and 0.1% sodium azide and applied to a chambered coverslip (CultureWell, Molecular Probes). After allowing cells to settle for 20 minutes at room temperature, supernatant was carefully aspirated from the coverslips. The cells were then fixed to the coverslip for 10 minutes at room temperature with 40 µl of 4% paraformaldehyde. Coverslips were washed with PBS, stained with DAPI, and then mounted onto a glass slide. Images were acquired with a Zeiss AxioObserver Z1 inverted microscope using equivalent exposure times. Optimal exposure times were determined based on the CD169 staining intensity of the majority of the sorted cells.
Two-photon microscopy
CFP + B cells were transferred intravenously and anti-CD169-biotin/streptavidin-PE was injected s.c. to label SSMs in Cxcr6 gfp/+ mice 20-24 hours prior to imaging. Lymph node explants were prepared for imaging as previously described [29,54] and imaged with a Zeiss LSM 7MP equipped with a Chameleon laser (Coherent). Fluorophores were excited at 870 nm and detected with 450-490 nm (CFP), 500-550 nm (GFP) and 570-640 nm (PE) emission filters. Images were acquired with Zen (Zeiss), and time-lapse images generated with Imaris 7.4.0 (Bitplane). Videos were processed with a Gaussian noise filter. Annotation and final compilation of videos were done with After Effects 7.0 software (Adobe Systems). Video files were converted to MPEG format with AVI-MPEG Converter for Windows 1.5 (FlyDragon Software). In some early experiments (imaging data not shown), images were acquired and videos generated as previously described [55].

Figure S1 IL-7Ra hi lymphocytes are located adjacent to CD169 + macrophages. Three additional examples of lymph node sections stained as in Figure 1B to detect CD169 (green) and IL-7Ra (red). FO, follicle; T, T zone. Scale bar = 50 µm. (TIF)

Figure S2 CD169 staining on Siglec1 −/− (CD169-deficient) IL-7Ra hi CCR6 + lymphocytes and CD11c hi dendritic cells in mixed bone marrow chimeras. Flow cytometric detection of CD169 staining on CD45.2 + Siglec1 −/− or Siglec1 +/+ (red) compared to CD45.1 + Siglec1 +/+ (blue) IL-7Ra hi CCR6 + cells (A) and CD11c hi cells (B) in mixed bone marrow chimeric mice. Data are representative of two experiments. (TIF)

Figure S3 IL-7Ra hi CXCR6 hi lymphocytes at the subcapsular sinus and interfollicular regions. Immunofluorescence microscopy of lymph node sections from Cxcr6 GFP/+ mice stained with anti-CD169 (blue) and anti-IL-7Ra (red) monoclonal antibodies. Two examples are shown and are representative of sections from four lymph nodes from two mice. FO, follicle; T, T zone. Scale bar = 50 µm.
(TIF)
Movie S1 Two photon imaging of CXCR6 hi cells migrating in association with CD169 + macrophages. Intravital TPSLM showing CXCR6 hi cells migrating in close association with CD169 + macrophages (labeled with anti-CD169-PE) at the subcapsular sinus and interfollicular regions of an explanted Cxcr6 GFP/+ lymph node. Movie shows a 12 µm maximum intensity z projection. Time is shown as hr:min:sec. Data in Movies S1 and S2 are representative of one experiment in which a total of five follicles in two lymph nodes were analyzed. Similar findings were observed in a second experiment of this type, in which macrophages were labeled with PE-ICs as described [2]. FO, follicle; IF, interfollicular region; SCS, subcapsular sinus.
(MPG)
Movie S2 Two photon imaging of CXCR6 hi cells migrating in association with CD169 + macrophages. Intravital TPSLM showing CXCR6 hi cells migrating in close association with CD169 + macrophages (labeled with anti-CD169-PE) at the subcapsular sinus and interfollicular regions of an explanted Cxcr6 GFP/+ lymph node. The clustering of CXCR6 hi cells seen in Movie S2 was observed in 4 of 10 movies. Movie shows a 12 µm maximum intensity z projection. Time is shown as hr:min:sec. FO, follicle; IF, interfollicular region; SCS, subcapsular sinus. (MPG)
|
v3-fos-license
|
2018-11-12T16:28:51.700Z
|
1996-01-01T00:00:00.000
|
83067942
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://newprairiepress.org/cgi/viewcontent.cgi?article=3253&context=kaesrr",
"pdf_hash": "802db59e2e32f67cc6460eeac6a99bc8220aa42a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:828",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "74c7344a4d067f793e716302a4b0e4496fe5c324",
"year": 1996
}
|
pes2o/s2orc
|
Coping with summer weather: management strategies to control heat stress
Heat stress occurs when a dairy cow's heat load is greater than her capacity to lose heat. The effects of heat stress include: increased respiration rate, increased water intake, increased sweating, decreased dry matter intake, slower rate of feed passage, decreased blood flow to internal organs, decreased milk production, and poor reproductive performance. The lower milk production and poorer reproductive performance cause economic losses to commercial dairy producers. This review will discuss methods that can be used on commercial dairy farms to reduce the effects of heat stress on dairy cattle.; Dairy Day, 1996, Kansas State University, Manhattan, KS, 1996;
Measuring Heat Stress
The severity of heat stress usually is quantified by a temperature humidity index (THI). Both ambient temperature and relative humidity are used to calculate a THI. A THI above 72 is associated with heat stress in dairy cattle. The THIs at various temperatures and relative humidities are presented in Figure 1. Dairy producers can purchase a thermometer/hygrometer and use Figure 1 to determine the level of heat stress at different locations on the dairy.
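The article presents the THI only as a chart. As a minimal sketch, one commonly used dairy THI formula (dry-bulb temperature in degrees F, relative humidity in percent) can be computed directly; note that only the "THI above 72" threshold comes from the article, while the formula and the finer stress bands are assumptions based on typical dairy guidelines.

```python
def thi(temp_f: float, rh_percent: float) -> float:
    """Temperature-humidity index from dry-bulb temperature (deg F)
    and relative humidity (%). One commonly used dairy formula."""
    return temp_f - (0.55 - 0.0055 * rh_percent) * (temp_f - 58.0)

def heat_stress(index: float) -> str:
    """Classify a THI value. Only the >72 cutoff is from the article;
    the finer bands are assumed typical guideline values."""
    if index <= 72:
        return "none"
    elif index < 79:
        return "mild"
    elif index < 89:
        return "moderate"
    else:
        return "severe"

# Example: a thermometer/hygrometer reading of 80 deg F at 50% RH
print(round(thi(80, 50), 2), heat_stress(thi(80, 50)))
```

Note that at 100% relative humidity the index equals the dry-bulb temperature, which matches the intuition that high humidity blocks evaporative cooling.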
Heat Loss in Dairy Cows
Dairy cows dissipate heat in several ways, including conduction, convection, radiation, and evaporative cooling. Conduction is based on the principle that heat flows from warm to cold. This method of heat loss requires physical contact with surrounding objects. An example of conductive cooling would be when a cow wades into a pond of water. Cooling by convection occurs when the layer of air next to the skin is replaced with cooler air. Radiation of body heat can occur when the ambient temperature is significantly cooler than the cow. At cooler temperatures, dairy cattle are efficient at radiating heat. Evaporative cooling occurs when sweat or moisture is evaporated away from the skin or respiratory tract. This is why dairy cattle perspire and increase respiration rates during heat stress. High humidity limits the ability of the cow to take advantage of evaporative cooling. When the ambient temperature is under 50 degrees F, nonevaporative methods of cooling account for 75% of the heat loss. At temperatures above 70 degrees F, evaporative cooling is the cow's primary mechanism for heat loss. Dairy producers can take advantage of the same mechanisms to cool dairy cows on the farm.
Water Availability
Providing access to water during heat stress is critical. Lactating dairy cattle will typically require between 35 and 45 gallons of water per day. Studies completed in climatic chambers indicate that water needs increase 1.2 to 2 times when cows are under heat stress. A water system needs to be designed to meet both peak demand and daily needs of the dairy. Making water available to cows leaving the milking parlor will increase water intake by cows during heat stress. Access to an 8-ft water trough is adequate for milking parlors with less than or equal to 25 stalls per side. When using drylot housing, we recommend having water troughs at two locations and 30 ft of trough perimeter per 100 cows or 80 ft of trough perimeter for 200 cows. In free-stall housing, one waterer or 2 ft of tank perimeter is adequate for every 15 to 20 cows. An ideal situation would be to have water available at every crossover between feed and resting areas.
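The daily intake figures above can be turned into a simple water-system sizing check. This is a minimal sketch: the 40 gal/day default sits in the article's 35-45 gal range and the 1.5x default sits in its 1.2-2x heat-stress range, but both defaults are illustrative choices, not article values.

```python
def herd_daily_water_gal(n_cows: int,
                         base_gal_per_cow: float = 40.0,
                         stress_factor: float = 1.5) -> float:
    """Estimated daily herd water demand (gallons) under heat stress.
    base_gal_per_cow: article cites 35-45 gal/day per lactating cow.
    stress_factor: article cites a 1.2-2x increase under heat stress."""
    return n_cows * base_gal_per_cow * stress_factor

# Example: a 200-cow herd under moderate heat stress
print(herd_daily_water_gal(200))
```

A supply and storage system sized only for the 35-45 gal baseline would fall well short on hot days, which is why the article stresses designing for peak demand.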
Shades
Cows housed in drylot or pasture situations should be provided with solid shade. Research from Florida and Arizona indicates that when high-producing cows are exposed to direct sunlight and a THI exceeds 80 during daylight hours, shaded cows will produce approximately 4 to 5 lb of additional milk per day. Natural shading provided by trees is effective, but most often shades are constructed from solid steel or aluminum. Providing 38 to 45 square ft of solid shade per mature dairy cow is adequate to reduce solar radiation. Shades should be constructed at a height of at least 12 ft with a north-south orientation to prevent wet areas from developing under them. Using more porous materials such as shade cloth or snow fence is not as effective as solid shades.
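The shade guideline above translates directly into a structure-sizing calculation; the sketch below uses the midpoint of the article's 38-45 sq ft range as an assumed default.

```python
def shade_area_sqft(n_cows: int, sqft_per_cow: float = 42.0) -> float:
    """Solid shade area needed for a drylot or pasture group.
    Article guideline: 38-45 sq ft of solid shade per mature cow."""
    return n_cows * sqft_per_cow

# Example: shade structure for a 150-cow drylot group
print(shade_area_sqft(150))
```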
Holding Pen
The holding pen is where dairy cows probably experience the most heat stress. Putting cows into a holding pen is similar to putting several large furnaces into a small area with the thermostat stuck on 100 degrees F. On most days, cows would benefit from shade over the holding pen and open-sided holding areas to provide ventilation. Installing fans will help ventilate the holding pen. The level of heat stress in the holding pen can be measured by holding a thermometer/hygrometer on a long rod over the top of the cows to determine the temperature and relative humidity. These values then can be used to determine a THI from Figure 1.
Cows can be cooled in the holding pen before milking. This method uses low volume sprinklers to wet cows and large fans to hasten evaporation of the water. In this way, cows are cooled as often as they are milked. Both spray and fans should be operated continuously, providing approximately 1000 CFM of air per cow. Fans should be mounted overhead at a 30 degree angle from vertical, so that the air will blow down on cows. Water lines in front of the fans spray 7 to 10 gallons of water per hour at 125 to 150 PSI. Fans of 36- to 48-inch diameter are used most commonly. In an Arizona study, body temperature was lowered 3.5 degrees F, resulting in 1.7 lb of extra milk per day per cow cooled in the holding pen. Fans and water spray should be used during the summer months whenever the ambient temperature exceeds 80 degrees F (day or night). There also is an advantage in using the fans only when the temperature is between 80 and 90 degrees F.
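The airflow guideline above can be sketched as a rough fan-count estimate. The ~1000 CFM per cow figure is from the article; the per-fan rating is an assumption (a 36-inch panel fan is often rated on the order of 10,000-12,000 CFM), so treat the default as illustrative rather than a specification.

```python
import math

def holding_pen_fans(n_cows: int,
                     cfm_per_cow: float = 1000.0,
                     fan_rating_cfm: float = 11000.0) -> int:
    """Estimate the number of holding-pen fans needed.
    cfm_per_cow: article guideline (~1000 CFM of air per cow).
    fan_rating_cfm: assumed capacity of one 36- to 48-inch fan."""
    return math.ceil(n_cows * cfm_per_cow / fan_rating_cfm)

# Example: a holding pen that fills with 120 cows per milking group
print(holding_pen_fans(120))
```

In practice the fan count would be checked against pen dimensions so that the 30-degree downward airflow pattern covers the whole group.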
Exit Lane Cooling
Cows can be cooled as they exit the parlor. Typically three to four nozzles are installed in the exit lane, with a delivery of approximately 8 gallons of water per minute at 35 to 40 PSI. The nozzles are turned on and off with an electric eye or wand switch as the cow passes under the nozzles. If properly installed, the top and sides of the cow are wet but the head and udder will remain dry, so water will not interfere with postmilking teat dipping.
Free Stalls
Free-stall housing should be constructed to provide good natural ventilation. Sidewalls should be 12 to 14 ft high to increase the volume of air in the housing area. The sidewalls should be able to open a minimum of 50% and preferably 75 to 100%. Fresh air should be introduced at the cow's level. Curtains on the sides of free-stall barns allow greater flexibility in controlling the ventilation. Because warm air rises, steeper sloped roofs provide upward flow of warm air. However, roofs with slopes steeper than a 6:12 pitch prevent incoming air from dropping into the area occupied by the cows. Roofs with slopes less than 4:12 may cause condensation and higher internal temperatures in the summer. Roof slopes for free-stall housing should therefore range from 4:12 to 6:12. Providing openings in end walls and alley doors will improve summer ventilation. Gable buildings should have a continuous ridge opening to allow warm air to escape. The ridge opening should be 2 inches for each 10 ft of building width. Naturally ventilated buildings should be spaced a minimum of 50 ft apart.
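The ridge-opening rule above is easy to encode; a minimal sketch using the article's 2 inches per 10 ft of building width guideline:

```python
def ridge_opening_in(building_width_ft: float) -> float:
    """Continuous ridge opening width (inches) for a gable free-stall barn.
    Article guideline: 2 inches per 10 ft of building width."""
    return 2.0 * building_width_ft / 10.0

# Example: a 90-ft-wide free-stall barn
print(ridge_opening_in(90))
```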
Additional cooling in free-stall areas can be provided by adding fans and a sprinkler system. Free-stall bedding or sand must not become wet. Typically, a sprinkler system could be located over the lockups, and fans could be used over the free stalls, lockups, or both. The sprinkler system can be put on a timer to reduce water usage.
Figure 1. Temperature Humidity Index at Various Combinations of Temperature and Relative Humidity
|
v3-fos-license
|
2019-04-06T00:41:26.374Z
|
2019-04-04T00:00:00.000
|
96435154
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-019-42220-y.pdf",
"pdf_hash": "a59a160145397b66af68f3498c9d16c448592101",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:830",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "a59a160145397b66af68f3498c9d16c448592101",
"year": 2019
}
|
pes2o/s2orc
|
Metabolic stress controls mutant p53 R248Q stability in acute myeloid leukemia cells
Eliminating mutant p53 (mt p53) protein could be a useful strategy to treat mt p53 tumors and potentially improve the prognosis of cancer patients. In this study, we unveil different mechanisms that eliminate p53-R248Q, one of the most frequent mutants found in human cancers. We show that the Hsp90 inhibitor 17-AAG eliminates R248Q by stimulating macroautophagy under normal growth conditions. Metabolic stress induced by the pyruvate dehydrogenase kinase-1 (PDK1) inhibitor dichloroacetate (DCA) inhibits the macroautophagy pathway. This induces the accumulation of R248Q, which in addition further inhibits macroautophagy. Combination of DCA and 17-AAG further decreases the autophagy flux compared to DCA alone. Despite this, this co-treatment strongly decreases R248Q levels. In this situation of metabolic stress, 17-AAG induces the binding of p53-R248Q to Hsc70 and the activation of Chaperone-Mediated Autophagy (CMA), leading to higher R248Q degradation than in non-stress conditions. Thus, different metabolic contexts induce diverse autophagy mechanisms that degrade p53-R248Q, and under metabolic stress, its degradation is CMA-mediated. Hence, we present different strategies to eliminate this mutant and provide new evidence of the crosstalk between macroautophagy and CMA and their potential use to target mutant p53.
Wild-type p53 (wt p53) was once referred to as "the guardian of the genome" for its important role as a tumor suppressor gene 1 . Today p53 is known not only as a tumor suppressor but also as a master regulator of many cellular processes such as cell cycle, apoptosis, DNA repair, inflammation and metabolism 2 . The p53 gene is the most frequent target for mutation in human cancer, including hematological malignancies 3 . The frequency of p53 mutations in acute myeloid leukemia (AML) is approximately 10%. However, in AML with complex karyotype, the rate of p53 mutations and/or deletions is almost 70% 4 . Furthermore, p53 mutations are associated with poor prognosis and decreased survival in AML.
Mutations are found in all coding exons of the p53 gene, but most of them are located in the DNA-binding domain, with the most common in codons 175, 245, 248, 273 and 282. These are the "hot spot" residues, which are very frequently mutated in all types of cancer 5 . These mutations do not always correlate with loss of function of p53 and can actively promote tumor growth by gain-of-function (GOF) mechanism [6][7][8] . The important role of GOF by mutant p53 (mt p53) is further supported by the finding that patients carrying missense mutation and expressing mt p53 in the germline have a significantly earlier cancer onset than patients with mutations in TP53 that result in loss of p53 protein 9,10 . Moreover, mt p53 accumulation is critical for p53 oncogenic GOF that actively contributes to cancer development and progression 11 .
R248 is mutated into three amino acids: R248Q, R248W and R248L 12 . Interestingly, p53-R248Q, but not p53-R248W, confers invasive ability when overexpressed in p53-null cells 13 . Thus, not only the position of the mutation but also the nature of the substitution may influence the activity of the resulting mt p53 protein. In fact, mutant R248Q induces more aggressive tumors in mice compared with other hotspot mutants [14][15][16] . R248Q has a greater tendency to aggregate and can seed the aggregation of wt p53. In breast cancer samples, R248Q aggregates into prion-like amyloid oligomers, sequestrating and inactivating wt p53 17 . Codon 248 of the p53 protein is most frequently mutated in pancreatic tumors (based on cBioPortal), in lymphomas 18 , myelodysplastic syndromes (MDS) and AML 19,20 . In summary, it is essential to further study mechanisms reducing the function of this p53 mutant, but with a minimal effect on wt p53. Wt p53 stability is mainly controlled by the ubiquitin-proteasome pathway; however, it is still unclear which pathway degrades mt p53. In response to different stresses, both wt and mt p53 accumulate in cells. While wt p53 returns to basal levels following recovery from stress, mt p53 remains stable 21 . Certain mt p53 proteins accumulate to high levels in tumor cells 22 due to their interaction with the chaperones Hsp70 and Hsp90. Hsp90 inactivates the E3 ligases MDM2 and CHIP, impairing proteasomal degradation of mt p53 23 . Mt p53 degradation also occurs by different types of autophagy: macroautophagy and Chaperone-Mediated Autophagy (CMA) 24 . Macroautophagy, induced by glucose restriction or by proteasomal inhibition, promotes mt p53 degradation 25 . When nutritional deprivation inhibits macroautophagy, CMA is activated and induces mt p53 degradation 26 . For further complexity, mt p53 can inhibit autophagy 27,28 .
One approach to target mt p53 is to reduce mt p53 levels with little effect on wt p53 using compounds that promote degradation of mt p53, such as the Hsp90 inhibitor 17-AAG 23,29 . 17-AAG is a geldanamycin analogue, currently in clinical trials as an anticancer drug, that triggers the activation of a heat shock response, promotes proteasome degradation and induces the autophagic pathway [30][31][32] .
In this study, we uncover different mechanisms that promote mutant p53-R248Q depletion in different cellular contexts. In tumors growing in normal, non-stress conditions, 17-AAG eliminates R248Q through macroautophagy. However, in tumors with macroautophagy inhibition and high stability of mt p53, 17-AAG was still able to induce mt p53 degradation through CMA. We also show that metabolic stress caused by the pyruvate dehydrogenase kinase-1 (PDK1) inhibitor dichloroacetate (DCA) promotes higher accumulation and stabilization of the R248Q protein by increasing its interaction with the Hsp90 chaperone machinery. Furthermore, accumulation of R248Q prevents macroautophagy by inhibiting the expression of several macroautophagy genes. Our data demonstrate that there is a negative feedback loop between macroautophagy and mt p53. Under DCA-induced metabolic stress, when macroautophagy is largely reduced, 17-AAG induces mt p53 degradation through CMA.
Results
Different effects of DCA and 17-AAG on R248Q stability. DCA causes metabolic stress by inhibiting PDK1 in AML cells 33,34 . This forces cells to decrease glycolysis and increase oxidative phosphorylation (OXPHOS) [33][34][35] . DCA induces wt p53 transcriptional activity via AMPK and its efficacy in causing cell cycle arrest depends on p53 status 29 . Besides, we observed that both wt p53 and mt p53 protein levels accumulated after DCA treatment, including in the NB4 cell line, which carries p53 R248Q 29 . We extensively investigated the functional activity of wt p53 after DCA treatment and found that p53 induced cell cycle arrest in G0/G1 phase, although it failed to induce programmed cell death (PCD). Cell cycle arrest involved p53 transcriptional activity because we observed upregulation of MDM2 and p21 mRNAs 29 ( Supplementary Fig. 1a). We also described in this work that DCA-induced metabolic stress depended on wt p53 and involved mRNA expression of its metabolic targets GLS2, SCO2 and AMPKβ 29 . Moreover, DCA increased ROS levels and disturbed oxygen consumption 29 .
We confirmed here that DCA induced accumulation of mt p53 protein without affecting p53 mRNA ( Fig. 1a and Supplementary Fig. 1b). Hsp90 inhibition by 17-AAG promoted R248Q degradation (Fig. 1b). Surprisingly, co-treatment with DCA and 17-AAG was more effective than 17-AAG alone (Fig. 1c). This effect was only observed at the protein level. R248Q mRNA was not affected by DCA + 17-AAG co-treatment ( Supplementary Fig. 1c). This shows that the effects of DCA rely mainly on protein stability, as previously proposed 29 .
To understand how these two drugs, with individually opposite effects on mt p53 protein levels, synergize to eliminate R248Q, we studied the stability of R248Q in the presence of DCA and 17-AAG, alone and in combination. First, we determined whether the augmentation in R248Q protein corresponded with an increase in stability following DCA treatment. To measure the half-life of mt p53, NB4 cells were treated for 24 h with 10 mM DCA. We then added the protein synthesis inhibitor cycloheximide and analyzed the levels of p53 by western blotting at different times. As shown in Fig. 1d, the stability of R248Q was higher after DCA treatment, suggesting that DCA interferes with mt p53 degradation. We then examined the stability of R248Q after treatment with DCA and/or 17-AAG. The half-life of R248Q decreased with 17-AAG (Fig. 1e). DCA + 17-AAG co-treatment further reduced R248Q levels. These data indicate that the decrease in R248Q protein levels upon DCA + 17-AAG treatment is due to a decrease in stability and not to inhibition of mt p53 mRNA expression ( Supplementary Fig. 1c).
Hsp90 controls R248Q stability. There are two main routes of protein degradation in eukaryotes, the ubiquitin-proteasome and the autophagy-lysosome pathways. The first predominantly regulates the stability of wt p53. Little is known about the degradation of mutant p53. In general, mt p53 proteins show increased stability compared to the wt protein due to their interaction with the Hsp90 chaperone complex 23 . We compared the endogenous levels of p53 protein from two AML cell lines with different p53 status, OCI-AML3 cells (wt p53) and NB4 cells (R248Q). The NB4 cell line showed higher expression of p53 (Fig. 2a). The proteasome inhibitor MG132 induced accumulation of wt p53 but not of p53-R248Q (Fig. 2b). The magnitude of the effect of MG132 treatment indicates that the proteasome is the major route for wt p53 degradation, but not for R248Q. This result suggests that different degradation pathways control the stability of wt p53 and R248Q.
Next, we examined the role of the lysosome pathway by using the lysosome inhibitor chloroquine (CQ) and wortmannin, an inhibitor of the early stage of macroautophagy. Both inhibitors caused accumulation of R248Q protein (Fig. 2c). The increase of the macroautophagy markers LC3-I and LC3-II, detected after treatment with wortmannin and CQ respectively, indicates that both compounds were functional. MG132 did not affect LC3-I and LC3-II or mt p53 levels. Hence, R248Q degradation is mainly dependent on the lysosomal, but not on the proteasomal, pathway.
The molecular chaperone Hsp90 maintains the conformation, stability and activity of several oncogenic proteins, including specific mt p53 proteins 36 . As 17-AAG promotes degradation and decreases stability of R248Q (Fig. 1b), we studied the role of Hsp90. We monitored the p53-Hsp90 interaction by co-immunoprecipitation experiments after treatment with DCA and/or 17-AAG (Fig. 2d). This co-immunoprecipitation experiment was performed for 8 hours to avoid the complete degradation of p53. NB4 cell extracts were used for immunoprecipitation with the anti-p53 antibody (DO-1) and the eluates blotted against Hsp90. R248Q interacted with the chaperone Hsp90 (Fig. 2d, lane 1). As expected, inhibition of Hsp90 with 17-AAG reduced the interaction of R248Q with Hsp90 (Fig. 2d, lane 2), while in contrast DCA increased the R248Q-Hsp90 binding (Fig. 2d, lane 3). DCA + 17-AAG co-treatment decreased the Hsp90-R248Q interaction compared to DCA alone (Fig. 2d, lane 4). These results indicate that Hsp90 is an important regulator of R248Q stability. 17-AAG is currently in clinical trials as an anticancer drug that specifically inhibits Hsp90 functions 32 , but up-regulates other HSPs. 17-AAG promotes mt p53 proteasomal degradation 23 and autophagy to remove aggregates 31 . To investigate whether 17-AAG enhances proteasomal or lysosomal degradation in wt p53 and mt p53 AML cells, these were exposed to 17-AAG at different doses for 24 h and then treated with the proteasomal inhibitor MG132 or the macroautophagy inhibitor bafilomycin A1 (Baf A1) for 4 hours before analyzing samples by western blotting. Baf A1 is an inhibitor of the late phase of autophagy that prevents the fusion between autophagosomes and lysosomes. In NB4 cells, inhibition of the proteasome with MG132 failed to accumulate mt p53 and instead resulted in a reduction (Fig. 3a).
In contrast, inhibition of macroautophagy by Baf A1 blocked mt p53 degradation mediated by 17-AAG. In OCI-AML-3, wt p53 accumulated only in the presence of MG132.
17-AAG induces macroautophagy in AML cell lines.
Next, we examined whether the macroautophagy pathway mediates R248Q degradation by 17-AAG. We treated NB4 cells with 17-AAG for 24 h and then with 50 nM wortmannin and 50 µM CQ for 4 h. In contrast to DCA, both wortmannin and CQ were able to rescue the degradation caused by 17-AAG (Fig. 3b). This suggests that 17-AAG causes the elimination of R248Q through induction of macroautophagy.
We next determined the effect of 17-AAG and DCA on macroautophagy flux in AML cells with different p53 status. We monitored autophagy flux by the addition of CQ (Fig. 3c). Protein extracts were analyzed by western blotting for LC3-II, a well-established macroautophagy marker. In OCI-AML-3, the autophagy flux was maintained after treatment with either of these drugs. We observed an increase in LC3-II levels after treatment with 17-AAG in NB4 cells (R248Q). Interestingly, a reduction of autophagy flux was observed after DCA treatment (Fig. 3c).
Macroautophagy is a complex sequence of biological events leading to formation, maturation, and fusion of autophagosomes with lysosomes to allow the degradation and recycling of cellular components 37 . All these events are controlled by many autophagy-related genes (ATGs), which are mostly transcriptionally induced by different stimuli such as nutritional deprivation, infections or metabolic and oncogenic stress 38 . To determine the mechanism for the increase in autophagy flux upon 17-AAG treatment in NB4 cells, we monitored mRNA levels of some essential autophagy genes such as ATG5, BECN1 and two different isoforms of LC3 (Fig. 3d). BECN1 and LC3 mRNAs were significantly increased, indicating that 17-AAG may up-regulate macroautophagy by increasing the transcription of autophagy genes. Taken together, these results suggest that the degradation of R248Q by 17-AAG is dependent on macroautophagy and not on proteasomal mechanisms. Thus, the stability of wt p53 and mt p53 is controlled by different proteolytic mechanisms.
DCA inhibits macroautophagy in the presence of R248Q.
Depending on the stress and wt p53 location, i.e. nuclear or cytoplasmic, macroautophagy can be either stimulated or inhibited by this tumor suppressor 39 . Mutant p53, which is mainly cytoplasmic, causes macroautophagy inhibition by repressing the expression of some critical autophagy genes [26][27][28]40 . We studied the autophagy flux after DCA treatment in cell lines with different p53 status. Autophagy flux was again monitored by detection of LC3-II under CQ treatment. In wt (OCI-AML3) and null (HL60) p53 cell lines, DCA did not substantially affect this flux. However, in the NB4 cell line carrying the R248Q p53 mutant, we observed a marked decrease in LC3-II levels (Fig. 4a).
Our results show that the effect of DCA on macroautophagy depends on the p53 status and illustrate that the presence of R248Q may impair macroautophagy. These data also suggest that there is a negative feedback loop between macroautophagy and R248Q. Macroautophagy controls mt p53 protein levels and, at the same time, mt p53 has an inhibitory effect on macroautophagy.
Next, we monitored the effect of the co-treatment of DCA with 17-AAG on macroautophagy by evaluating the autophagy flux in cells with different p53 status (Fig. 4b). We did not observe any significant change in autophagy in OCI-AML-3 (wt p53) and HL-60 cells (p53 null) after DCA and/or 17-AAG treatments. Surprisingly, in NB4 cells the combination of DCA and 17-AAG further decreased the autophagy flux compared to DCA alone, and this correlated with an enhanced reduction of R248Q protein. Hence, decreasing mt p53 does not rescue the inhibition induced by metabolic stress on macroautophagy flux. Moreover, CQ partially blocked R248Q degradation induced by 17-AAG and DCA + 17-AAG, indicating that the lysosomal pathway could be responsible for R248Q removal after these treatments. Therefore, there should be an alternative pathway to degrade mt p53 in conditions where macroautophagy is inhibited.
Chaperone-mediated autophagy degrades R248Q under metabolic stress. CMA can modulate the degradation of multiple Hsp90 client proteins 41 . p53-R248Q significantly decreased with the DCA + 17-AAG combination under conditions where macroautophagy was impaired (Figs 1 and 4b), suggesting a possible role for CMA. In agreement with this hypothesis, 17-AAG increased Hsc70 protein levels even in the presence of DCA in mt and wt p53 cell lines (Fig. 5a). This finding is important because Hsc70 is the only known chaperone to mediate substrate targeting for CMA 42 . To investigate whether CMA is responsible for R248Q degradation, we tested the interaction of R248Q with Hsc70 (Fig. 5b). Both 17-AAG and DCA increased the interaction between R248Q and Hsc70, and this interaction was preserved with the DCA + 17-AAG co-treatment. 17-AAG decreased the interaction of Hsc70 and Hsp90. Our results indicate that inhibiting Hsp90 function has no effect on the binding of R248Q to Hsc70 nor on Hsc70 activity. Importantly, Hsp90 inhibition may be required to induce CMA by 17-AAG. These data also suggest that different pathways are engaged by Hsp90 and Hsc70 to control the stability of R248Q (Fig. 5b).
We further investigated which type of degradation (proteasomal, macroautophagy or CMA) was induced in the presence of DCA and 17-AAG. For this purpose, NB4 cells were treated with DCA and/or 17-AAG with the addition of MG132, 3-Methyladenine (3-MA), wortmannin or CQ (Fig. 5c). We hence used three inhibitors to target autophagy at different steps. 3-MA and wortmannin inhibit autophagosome formation and autophagic sequestration, respectively. CQ blocks lysosomal degradation. Only the presence of CQ prevented the degradation of R248Q, indicating that under these conditions, R248Q degradation is mainly lysosomal and probably through CMA.
Finally, to determine whether the presence of Hsc70 is essential for R248Q degradation, we knocked down Hsc70 using two specific siRNAs (Fig. 5d). Downregulation of Hsc70 blocked the degradation of R248Q by 17-AAG and DCA + 17-AAG, indicating that Hsc70 plays a vital role in the stability of R248Q.

The inhibitory effect of DCA on autophagy depends on p53 status.

The presence of mt p53 can have an adverse impact on macroautophagy by impairing autolysosome formation 27 . Mutants of p53, including R248Q, can counteract autophagy at various phases of the process 28 . To further investigate whether DCA enhances macroautophagy inhibition via mt p53, we monitored the expression of some essential autophagy genes such as ATG5, ATG12, BECN1 and two different isoforms of LC3 (Fig. 6). A decrease in mRNA expression of all these genes was only observed in the NB4 cell line expressing R248Q. No decrease in gene expression was detected in OCI-AML-3 cells (wt p53) or HL60 cells (p53 null). ATG12 mRNA increased about 2-3 fold in HL60 cells (p53 null). Hence, DCA inhibits the expression of autophagy genes in the presence of R248Q.
Mutant R248Q has an inhibitory effect on macroautophagy.
To further investigate whether DCA inhibits macroautophagy via mt p53, we silenced or overexpressed R248Q in the AML cell lines described above (Fig. 7). First, we analyzed in NB4 cells the effect of DCA on the mRNA levels of different autophagy genes after reduction of R248Q by siRNA (Fig. 7a). R248Q down-regulation efficiently enhanced the expression of the autophagy genes, indicating that the presence of R248Q inhibited the macroautophagy pathway. The knockdown of R248Q partially blocked the inhibitory effect of DCA. In particular, the inhibition of BECN1 mRNA by DCA was entirely prevented after reduction of R248Q. Beclin-1, the protein encoded by the BECN1 gene, is a crucial component of the nucleation and maturation of the macroautophagy pathway, one of the early steps of macroautophagy. ATG5/ATG12 and LC3 mediate the elongation of the phagophore 43 . Based on these data, R248Q could have an inhibitory effect on the formation of the autophagosome at the early steps of the macroautophagy pathway. In addition, DCA did not increase mt p53 mRNA in NB4 cells treated with control siRNA (Fig. 7a). This shows that the effects of DCA rely mainly on protein stability, as previously proposed 29 .
Hence, we next checked the autophagy flux after R248Q down-regulation (Fig. 7b). As we previously observed, DCA reduced autophagy flux in control siRNA transfected cells. Interestingly, the flux was restored after p53 knockdown. Furthermore, ectopic expression of wt p53 and R248Q in p53-null HL60 cells caused a reduction of the autophagy flux in untreated cells (Fig. 7c). In the presence of DCA, only R248Q further decreased the autophagy flux. All these results demonstrate the repressive role of mutant p53-R248Q in macroautophagy and suggest that the DCA-induced inhibition of macroautophagy could be due to the increase in mutant p53-R248Q levels.
Co-treatment of DCA and 17-AAG caused inhibition of macroautophagy in a p53-dependent manner.
To investigate the effect of wt p53 or R248Q on macroautophagy after treatment with DCA and/or 17-AAG, we carried out experiments to overexpress and knock down both genes. The efficiency of the knockdown of R248Q was confirmed by qPCR and western blot analysis in NB4 cells (Fig. 8a). Whereas in cells transfected with control siRNA DCA + 17-AAG decreased the autophagy flux (Fig. 8b, top panel), reduction of R248Q by siRNA re-established the autophagy flux (Fig. 8b, bottom panel). These data demonstrate a critical role of R248Q in inhibiting autophagy. To further explore the role of R248Q in autophagy, HL60 cells were transfected with plasmids encoding wt p53 or R248Q. Autophagy flux analysis showed that when R248Q was overexpressed, autophagy flux decreased after DCA + 17-AAG co-treatment (Fig. 8c, bottom panel). However, overexpression of wt p53 had little or no effect on autophagy flux (Fig. 8c, top panel). These data are consistent with the idea that this mt p53 has an inhibitory effect on the autophagy flux stimulated by metabolic stress.
Discussion
Cells growing in low resources or under metabolic stress use autophagy to survive. We show here that metabolic stress regulates different forms of autophagy depending on p53 status. DCA induces metabolic stress and p53 accumulation. In cells expressing wt p53 or lacking p53 expression, DCA does not affect the autophagy flux. In cells expressing the p53 mutation R248Q, metabolic stress represses macroautophagy and promotes p53-R248Q accumulation. In a feedback loop, R248Q further represses macroautophagy, inducing even higher accumulation of R248Q protein. We show that 17-AAG short-circuits this loop by inhibiting Hsp90, which releases R248Q bound to Hsc70. In this condition, Hsc70 induces massive R248Q degradation by CMA. Interestingly, wt p53 is not affected by this treatment because it is not mainly stabilized by Hsp90. Therefore, the DCA + 17-AAG combination is effective in eliminating mt p53.
It is now established that mt p53 acquires oncogenic functions to drive cell migration, invasion, and metastasis, and that p53 mutation does not represent the equivalent of p53 loss 7 . Furthermore, not all p53 mutants have the same GOF activities. Different p53 mutations impart unique activities to stimulate the development of various tumor types. Therefore, it is essential to study each p53 mutant independently to devise different targeting and therapeutic strategies. It has been reported that mt p53 can be targeted for proteasomal or autophagic degradation 24,25 ; however, it remains unclear how mutant R248Q is degraded. In the present study, we investigated different strategies to promote the elimination of R248Q in different cellular conditions. We specifically focused on the p53-R248Q mutant because it is one of the mutants most frequently found in a wide range of human cancers, especially in AML. It would be interesting to investigate whether other p53 mutants are degraded by the same mechanisms as p53-R248Q. It is possible, though, that different degradation mechanisms are activated depending on the physiological conditions, e.g. metabolic stress, and on the mutation. Specific mt p53 proteins accumulate to high levels in tumor cells due to defects in their degradation. Additionally, multiple stress signals can induce their stabilization and promote their GOF. This favors the development of more aggressive tumors 11 . Mt p53 stabilization could be due to interaction with the Hsp70 and Hsp90 chaperones, which protect mt p53 from degradation 23 .
We found that autophagy, and not proteasomal degradation, is the primary pathway responsible for the effective elimination of R248Q. The use of DCA and 17-AAG allowed us to determine the alternative types of the autophagy pathway engaged to degrade R248Q under different conditions. DCA stabilizes R248Q by enhancing its binding with the Hsp90 chaperone and inhibiting the macroautophagy pathway. In contrast, inhibition of Hsp90 function by 17-AAG induces macroautophagy and promotes R248Q degradation. When macroautophagy is repressed under confluent conditions, CMA degrades mt p53 26 . CMA contributes to the energetic cellular balance and is activated by metabolic stress 44 . We show here that DCA-induced PDK1 inhibition constrains macroautophagy and, by inducing metabolic stress, activates CMA. This explains why DCA + 17-AAG causes higher R248Q destabilization than 17-AAG alone.
The Hsc70 cytosolic chaperone is essential in mediating CMA 45 . It is also part of the Hsp90 complex, and it can be found associated with different mt p53 proteins 26,46 . We propose that DCA pre-treatment increases the interaction of R248Q with the chaperone complex, including Hsc70, leading to its stabilization and inhibition of macroautophagy. The subsequent addition of 17-AAG released Hsp90 from the complex without affecting the interaction with Hsc70. This promotes CMA-mediated R248Q degradation. Similar results were found in a study using an oxazoline analog of apratoxin A (oz-apraA) 47 . By inhibiting Hsp90 function, oz-apraA increases the interaction of Hsp90 clients with the Hsc70/Hsp70 chaperones and promotes their degradation by CMA.
The crosstalk between different forms of autophagy pathways has been reported by various studies and the Hsc70 chaperone has been proposed as a candidate for acting as a cross-talking molecule between macroautophagy and CMA 47 . Cells with impaired CMA function were able to promote protein degradation through up-regulation of macroautophagy 48 and vice versa, inhibition of macroautophagy also contributes to further induction of CMA 45 . Macroautophagy and CMA communicate with each other and Hsc70 may be a fundamental element for this crosstalk 49 .
In summary, we provide different strategies to eliminate the p53 mutant R248Q using 17-AAG. Our data show that 17-AAG induces macroautophagy to eliminate R248Q under normal growth conditions. Energy stress stimulates the accumulation of R248Q with molecular chaperones and enhances its inhibitory effect on macroautophagy. Under this stress condition, 17-AAG can still remove the R248Q protein through the CMA pathway. Understanding the mechanisms of mt p53 degradation may help in the development of new therapeutic approaches and will be useful for the treatment of patients carrying p53 mutations.
Methods

Reagents, siRNAs, plasmids and transfection. DCA was purchased from Santa Cruz. 17-AAG was from Selleck. MG132 was purchased from Calbiochem. Cycloheximide, wortmannin, bafilomycin A1, chloroquine and 3-MA were purchased from Sigma-Aldrich. Wt and mutant p53 constructs were a gift from Dr. Shannon C Kenney. Transfection of wt p53 and R248Q was carried out using Lipofectamine RNAiMAX (Invitrogen) in Opti-MEM (Invitrogen), according to the manufacturer's instructions. p53 siRNA was a gift from Dr Xirodimas; it was an ON-TARGETplus SMARTpool (mixture of 4 siRNAs) from Dharmacon. Two siRNA duplexes were used to knock down Hsc70, as previously described 50 . siRNAs were synthesized by Eurofins MWG Operon. Cells (2 × 10 6 in 100 μl) were transfected with 100 nM Hsc70 siRNA, p53 siRNA or control siRNA by electroporation using the Amaxa SF Cell Line 4D-Nucleofector kit (Lonza Bioscience). Cells were harvested 24 to 72 h post-transfection.
Cell proliferation, viability. Cell viability and cell numbers were determined using the Muse ® Cell Analyzer (Millipore) as previously described 29 .
Western blot analysis. Primary antibodies against HSP90 (C45G5) and β-actin were purchased from Cell Signaling Technology. LC3B antibody was from GenTex. HSC70 (13D3) was purchased from Abcam. The anti-p53 antibody (DO-1) was a gift from Dr Xirodimas. Cell extracts were lysed in 2x SDS sample buffer. Proteins were resolved by SDS-PAGE and transferred to nitrocellulose or PVDF membranes using the Trans-Blot ® Turbo ™ Transfer System (Bio-Rad). Peroxidase-coupled anti-mouse and anti-rabbit secondary antibodies were used at a dilution of 1:10,000 (Sigma). Bound antibodies were detected by enhanced chemiluminescence (Millipore). Western blot images were obtained with the Molecular Imager Gel Doc XRS system (Biorad), which provides reliable and sensitive imaging of chemiluminescent western blots. For quantification and analysis, we used the Image Lab Software (Biorad) for acquisition, analysis and quantification of blot images.
Measure of p53 half-life. To measure the half-life of mt p53, NB4 cells were treated with 10 mM DCA and/or 1 µM 17-AAG. After 24 hours, the protein synthesis inhibitor cycloheximide (20 µg/ml) was added, and samples were taken at different time points. Cell extracts were prepared and the remaining mt p53 protein levels determined by western blotting. For the quantification and analysis of the protein levels, we used the Image Lab Software (Biorad).
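To make the quantification concrete: a cycloheximide chase like this one reduces to fitting first-order decay to the normalized band intensities, with t1/2 = ln 2 / k. The sketch below is an illustrative plain-Python version of that calculation; the time points and densitometry values are hypothetical, not data from this study.

```python
import math

def half_life_from_chase(times_h, intensities):
    """Estimate protein half-life (hours) from a cycloheximide-chase
    time course, assuming first-order (exponential) decay.
    intensities: densitometry values of the p53 band, each normalized
    to a loading control such as beta-actin."""
    y0 = intensities[0]
    lny = [math.log(v / y0) for v in intensities]  # relative to t = 0
    n = len(times_h)
    mt = sum(times_h) / n
    my = sum(lny) / n
    # least-squares slope of ln(y) vs t; decay constant k = -slope
    k = -sum((t - mt) * (y - my) for t, y in zip(times_h, lny)) \
        / sum((t - mt) ** 2 for t in times_h)
    return math.log(2) / k  # t1/2 = ln2 / k

# hypothetical densitometry values from a chase experiment
print(round(half_life_from_chase([0, 2, 4, 8], [1.00, 0.61, 0.37, 0.14]), 2))
# prints the estimated half-life in hours (about 2.8 for these values)
```

In practice, the intensities would come from the Image Lab densitometry readings, each normalized to its β-actin lane before fitting.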
RT-PCR and DNA sequencing. Total RNA was extracted using NucleoSpin RNA isolation columns (Macherey-Nagel), and reverse transcription was carried out using random primers. Quantitative PCR was performed as described previously 51 with SsoADV SYBR Green qPCR SuperMix (Biorad) and a CFX Connect TM Real-Time qPCR machine (Biorad). p53 and actin primers were previously described 29 . Primers for Beclin 1, ATG5, LC3a and LC3b 52 and ATG12 28 were previously described. All samples were normalized to β-actin mRNA levels.
Quantification of autophagy flux. Flux was monitored by the addition of CQ (50 µM) for 4 h at the end of the incubation periods. Flux was determined by subtracting the control sample from the CQ-treated sample, thus reflecting the amount of LC3-II that accumulated in the 4 h following CQ addition.
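The subtraction described above amounts to a one-line calculation. The following minimal Python sketch spells out the arithmetic; the densitometry values are hypothetical, and normalization to a loading control is an added assumption rather than something stated in the protocol.

```python
def autophagy_flux(lc3ii_cq, lc3ii_ctrl, actin_cq=1.0, actin_ctrl=1.0):
    """Autophagy flux readout: LC3-II accumulated during the 4 h CQ block.
    Each band is first normalized to its actin loading control, then the
    untreated (control) sample is subtracted from the CQ-treated one."""
    return lc3ii_cq / actin_cq - lc3ii_ctrl / actin_ctrl

# hypothetical densitometry readings
print(autophagy_flux(2.5, 0.5))    # large accumulation -> active flux
print(autophagy_flux(0.75, 0.5))   # small accumulation -> repressed flux
```

A larger difference means more LC3-II was being turned over by the lysosome before CQ blocked it, i.e. higher flux; a difference near zero indicates a repressed flux, as seen after DCA treatment in NB4 cells.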
Statistical analysis. Statistical analysis was performed using the Student's t test: *p < 0.05; **p < 0.01; ***p < 0.001. Values were expressed as the mean ± the standard error of the mean (SEM).
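The error bars and significance labels described above correspond to straightforward calculations. The sketch below (Python standard library only, with hypothetical replicate values) illustrates them; it is not the analysis script used for the paper.

```python
import math
import statistics as st

def sem(values):
    """Standard error of the mean (error bars = mean +/- SEM),
    using the sample standard deviation."""
    return st.stdev(values) / math.sqrt(len(values))

def star_label(p):
    """Map a p-value to the significance labels used in the text."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"

# hypothetical triplicate measurements
print(round(sem([1.0, 1.2, 1.4]), 3))  # 0.115
print(star_label(0.02))                # *
```

The p-value itself would come from a two-sample Student's t test on the replicate measurements, as stated in the methods.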
Effective Doping of Rare-earth Ions in Silica Gel: A Novel Approach to Design Active Electronic Devices
Eu luminescence spectroscopy has been used to investigate the effective doping of alkoxide-based silica (SiO2) gels using a novel pressure-assisted sol-gel method. The intense photoluminescence (PL) observed from the gel nanospheres can be directly attributed to their high specific surface area and the remarkable decrease in unsaturated dangling bonds under pressure. Increased dehydroxylation in an autoclave resulted in enhanced red (∼611 nm) PL emission from europium that is almost ten times brighter than that of SiO2 gel made at atmospheric pressure and ∼50°C using the conventional Stöber-Fink-Bohn process. The presented results are entirely different from those reported earlier for SiO2:Eu 3+ gel nanospheres, and the origin of the enhanced PL is discussed thoroughly.
Introduction
Luminescent materials have been utilized widely in applications ranging from lighting to sensing [1]. The photoluminescence (PL) properties of silica have also been an important topic of research for a long time, but the difficulty of incorporating rare-earth (RE) ions attached covalently to the silica (SiO 2 ) network is still considered a great challenge. Weak PL bands with peak energies ∼1.9-4.3 eV have been reported for both bulk and thin films of SiO 2 [2]. The sol-gel method has been used to prepare nanomaterials in the form of powders, films, fibers and monoliths based on various metal alkoxides [3]. It has been observed that monodisperse silica nanospheres formed by hydrolysis and condensation of alkoxides using the Stöber-Fink-Bohn (SFB) process give negligible luminescence. Incorporation of inorganic luminescent centres into SFB spheres has been demonstrated by some research groups [4] using RE ions, quantum dots and organic fluorophores, but the procedure requires multiple processing steps and the use of expensive and toxic ligands. During the last few years, there have been a few reports [5][6][7][8][9][10] on intense PL emission in the visible region of the electromagnetic spectrum by several different nanostructured materials that are highly disordered, such as nanowires, porous silicon, silica-based mesoporous fractals, and ferroelectrics with ABO 3 -type and AWO 4 -type perovskite structures (A = Ca, Ba, Sr), among others. The origin of this type of luminescence is always attributed to unsaturated chemical bonds in these nanostructures. In the case of ferroelectrics, as their energy band gaps are located at ∼3-4 eV, the PL emission in the visible region was correlated to their highly disordered states and the many localized electronic levels present within the optical gap, causing luminescence [6].
Since SiO 2 gel is a well-studied system [11] having visible transparency and a significant absorption peak lying in the ultra-violet region, the introduction of dopant ions acts as a perturbation to the well-studied system, leading to interesting optical properties. The RE ions are used as probes in the sol-gel method owing to their sensitivity to changes in the surrounding matrix, probing the local structure [12,13]. Thus derived SiO 2 and organically modified matrix composites could be the main precursor to prepare many RE-based smart optical materials [14]. (National Physical Laboratory, Council of Scientific and Industrial Research, Dr K S Krishnan Road, New Delhi, 110 012, India. *Corresponding author. E-mail: haranath@nplindia.org)
In this paper, we propose a novel methodology to prepare alkoxide-based silica gel nanospheres doped with Eu 3+ ions that show enhanced PL brightness, uniform size distribution and improved quantum efficiencies. This is a process by which highly disordered but doped silica gels can be made effectively useful for practical applications involving luminescence. It has already been shown that high pressure and temperature lead to more closely packed structures [14]. The presented analogy is unique and could be extended to many crystalline and non-crystalline phosphor-based systems to design a new family of sol-gel based nanocomposites, having a wide variety of applications in lasers, chemical sensing, waveguides, bioanalytical assays, blood flow monitoring, and effectively harvesting solar energy to improve solar cell efficiency.
Experimental
Silica nanospheres with modified surfaces were prepared by the sol-gel method and studied through the incorporation of europium (III) ions in two different ways. Europium nitrate (EuNO 3 ) was mixed with sufficient ethanol (EtOH) and added to tetraethylorthosilicate (TEOS). The stoichiometric amount of water essential to carry out the hydrolysis reaction was added dropwise under continuous stirring. Prior to gelling, low- and high-resolution transmission electron microscopy (TEM) images of the colloidal silica solution (also called silica sol) were taken at magnifications of 125 and 400 kX, respectively, as shown in Fig. 1. The silica particles observed are almost spherical with an average cluster size of ∼5 nm. A schematic of the SiO 2 gel network [11] and the probable description of pore and particle sizes are illustrated in Fig. 1(c). The silica sol was then divided into two halves for systematic experimentation. In the first experiment, one of the solution-containing vials was allowed to gel at atmospheric pressure (1 bar) and a controlled oven temperature of ∼50℃ (±0.1℃). In the second case, the solution was kept in an autoclave and subjected to high temperature and pressure of ∼150℃ and 120 bars, respectively, for about 5 hours. The processing time was intentionally kept the same in both cases. Once the wet silica gels were obtained, SiO 2 :Eu 3+ nanospheric powders were obtained by drying the gels overnight in a vacuum oven at ∼55℃. All the samples were studied using morphology, composition evaluation, UV-VIS absorption and luminescence spectroscopy techniques. Fig. 1 (a) TEM and (b) HRTEM images of silica sol particles aged for 5 hours at room temperature, 25℃; (c) schematic illustration of the gel network structure showing the pore and particle sizes (courtesy of ref. [11]).
For TEM observations, the samples were redispersed in methanol by ultrasonic treatment and dropped onto carbon-coated copper grids. TEM images were collected using a Tecnai G 2 F30 S-Twin instrument (FEI; Super Twin lens with C S =1.2 mm) operating at an accelerating voltage of 300 kV, with a point resolution of 0.2 nm and a lattice resolution of 0.14 nm, with an EDAX attachment. The Digital Micrograph program (Gatan) was used for image processing. Scanning electron microscopy was performed using a Zeiss EVO MA10.
X-ray diffraction (XRD) of the powder samples was performed using a Bruker D-8 Advance powder X-ray diffractometer with CuK α radiation operated at 35 kV and 30 mA. All the samples were amorphous in nature.
X-ray Photoelectron Spectroscopy (XPS) studies have been carried out using a Perkin Elmer 1257 model, at 300 K with a non-monochromatic AlKα line at 1486.6 eV. During photoemission studies, small specimen charging was observed which was later calibrated by assigning the C 1s signal at 285 eV.
The room temperature photoluminescence (PL) spectra were recorded using an Edinburgh Luminescence Spectrometer (Model F900) equipped with a xenon lamp. The excitation and emission spectra were recorded in the fluorescence mode over the range of 300-700 nm.
Results and discussion
It is important to highlight the pressure-assisted hydrothermal/solvothermal process for the preparation of a variety of oxide and chalcogenide nanoparticles. Solution-based nanomaterial synthesis often involves reactions carried out near the boiling point of the solvent, which may lead to poor material quality and low yield. In order to obtain crystalline, monodisperse nanoparticles, it is always necessary to work at relatively high temperatures and pressures, and the use of an acid digestion bomb (commonly called an autoclave) is the best alternative. Details of the autoclave synthesis of intrinsic silica gels have been reported extensively by Haranath et al. since 1996. In the current case, rare-earth (RE) doped silica gels were made under pressure at elevated temperatures as described in later sections. This method of preparation, based on the pressure-assisted sol-gel route, can modify the coordinating environment of the RE (dopant) ions so that the loss of excited-state energy via non-radiative mechanisms is minimized. The x-ray diffraction pattern of Eu 3+ doped SiO 2 gel powder depicted in Fig. 2 clearly shows a broad hump at ∼23°, indicating the completely amorphous nature of the SiO 2 nanospheres. SEM images shown in the inset of Fig. 2 illustrate the quality of the nanospheres with respect to their sizes and shapes. In other words, the conventional sol-gel method leads to a broad distribution of particle sizes in the range 10-50 nm (Fig. 2(a)), whereas the pressure-assisted sol-gel method has resulted in almost uniform particles (∼15 nm) with spherical shape (Fig. 2(b)). This establishes the fact that high pressure and temperature lead to more closely packed structures and increased charge-transfer energies that are efficiently transferred to the Eu 3+ ions.
The TEM and scanning electron micrographs (SEM) reveal that optimization of the experimental and processing parameters allows the preparation of uniform silica nanospheres. Moreover, the chemical composition and the relevant surface chemistry of the SiO 2 :Eu 3+ nanospheres were analyzed using x-ray photoelectron spectroscopy (XPS), which revealed peaks corresponding to the elements Si, O and Eu. The pass energy for the general survey scan and core-level spectra was kept at 143.05 and 71.55 eV, respectively. Surface contamination was removed by Ar ions with 4 keV beam energy; sputtering was performed in raster mode with an emission current of 20 mA for 5 min at a base pressure of 4.5×10 −7 torr. Figure 3 shows the survey spectra of SiO 2 :Eu 3+ nanospheres acquired in the range of 0-1200 eV. Survey spectra after sputtering show sharp peaks of C 1s (285 eV) and O 1s (537 eV). Two distinct states of Eu were observed at 1171 and 1141 eV for Eu 3d 3/2 and 3d 5/2 , respectively; the separation between the two states arises from spin-orbit splitting. The inset of Fig. 3 shows the Si(2p) core-level spectra. The binding energy of elemental Si(2p) is 99.15 eV, so the appearance of Si(2p) at 104 eV confirms that Si exists in the SiO 2 state. The binding energies of the various elements match very well with the peaks observed for standard SiO 2 :Eu 3+ . The presence of C is due to the air atmosphere and the organics used for the preparation of the Eu 3+ doped silica matrix. Figure 4 shows the UV-VIS absorbance spectra of Eu 3+ doped SiO 2 gel samples prepared at 50℃, 1 bar; and 150℃, 120 bars, respectively. The spectra are strongly indicative of two findings.
One is the reduction of surface states of the gel nanospheres, and the other is the maintenance of almost the same size of the gel particles even after subjecting them to high-pressure and high-temperature cycles. The same is evidenced by the absorption peaks indicated in the inset of Fig. 4. The observation of weak photoluminescence (PL) at room temperature in various amorphous silica nanostructures has been reported in the literature [15][16][17][18], but it is not sufficient for any fundamental or potential application. Figure 5 shows the PL excitation (PLE) and PL spectra from Eu 3+ doped SiO 2 gel baked in an autoclave. It is known that surface states and unsaturated dangling bonds play a critical role in determining the overall PL characteristics of nanostructures. If the silica gel is prepared with a rare-earth dopant (Eu 3+ in the present case), the PL is dominated by the radiative transitions ( 5 D 0 → 7 F j , j=0-3) from the levels of the Eu 3+ ions [19], as shown in the inset of Fig. 5. The emission spectra of Eu 3+ doped SiO 2 gels prepared at 50℃, 1 bar; and 150℃, 120 bars, are shown in Fig. 5. For recording the PL spectra, the excitation wavelength was fixed at 395 nm, which corresponds to the 5 L 6 level of the Eu 3+ ligand band. The PL from the Eu 3+ doped SiO 2 gel prepared at atmospheric pressure (1 bar) and 50℃ was found to be weak and inefficient for any practical application, whereas the PL intensity from the gel turned out to be much stronger (>10 times) when the sol was gelled under high temperature (150℃) and pressure (120 bars) inside an autoclave. Moreover, the PLE spectrum also became narrower for the autoclave-treated gel sample, which indicates a remarkable decrease in unsaturated chemical bonds in the final product. The most intense line at ∼611 nm corresponds to the hypersensitive transition between the 5 D 0 and 7 F 2 levels of the Eu 3+ ions and is relatively strong when the surrounding symmetry is low.
In this sense, it is generally accepted that the ratio of the emission intensities R=I( 5 D 0 → 7 F 2 )/I( 5 D 0 → 7 F 1 ) is an asymmetry parameter for the Eu 3+ sites and a measure of the extent of their interaction with the surrounding ligands [20]. This indicates that the environment of the Eu 3+ is dictated by the nano-SiO 2 host under high-pressure and high-temperature conditions. In addition, the broad excitation peaks observed between 250-500 nm became distinct and sharp for the sample made under high pressure. The sharp peaks in the PL and PLE spectra of SiO 2 :Eu nanospheres may be due to quantum confinement effects related to size restrictions. The combination of high surface-to-volume ratio, monodispersion and strong photoluminescence suggests that these silica nanospheres will find many interesting applications in semiconductor photophysics, inorganic light-emitting diodes, solar cells, environmental remediation and optoelectronic devices.
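The asymmetry ratio R defined above can be computed directly from a measured emission spectrum by integrating the 5D0→7F2 (∼611 nm) and 5D0→7F1 (∼592 nm) bands. The sketch below uses synthetic Gaussian bands as stand-ins for real data; the wavelength windows and peak parameters are illustrative assumptions, not values from this work.

```python
import math

def band_area(wavelength, intensity, lo, hi):
    """Trapezoidal integral of PL intensity over the window [lo, hi] nm."""
    pts = [(w, i) for w, i in zip(wavelength, intensity) if lo <= w <= hi]
    return sum((w2 - w1) * (i1 + i2) / 2.0
               for (w1, i1), (w2, i2) in zip(pts, pts[1:]))

def asymmetry_ratio(wavelength, intensity,
                    f2_window=(600.0, 640.0),   # 5D0 -> 7F2 band (~611 nm)
                    f1_window=(580.0, 600.0)):  # 5D0 -> 7F1 band (~592 nm)
    """R = I(5D0->7F2) / I(5D0->7F1); larger R implies lower Eu3+ site symmetry."""
    return (band_area(wavelength, intensity, *f2_window) /
            band_area(wavelength, intensity, *f1_window))

# Synthetic spectrum: two Gaussian bands standing in for a measured PL spectrum
wl = [550 + 0.05 * k for k in range(2001)]
spec = [3.0 * math.exp(-0.5 * ((w - 611) / 4) ** 2) +   # hypersensitive 7F2 band
        1.0 * math.exp(-0.5 * ((w - 592) / 4) ** 2)     # magnetic-dipole 7F1 band
        for w in wl]
print(round(asymmetry_ratio(wl, spec), 1))
```

With equal band widths, R here is essentially the amplitude ratio (∼3); for real spectra the window boundaries would be chosen from the measured band edges.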
Conclusions
In conclusion, we have demonstrated a mechanism by which photoluminescence enhancement in SiO 2 :Eu 3+ phosphor nanospheres can be achieved using a pressure-assisted sol-gel method. The observed red emission arises from the typical 5 D 0 − 7 F 2 transition of Eu 3+ and is found to be almost ten times brighter than that of the gel made at atmospheric pressure (1 bar) and ∼50℃ using the Stöber-Fink-Bohn process. This kind of process is highly desirable for many crystalline and noncrystalline materials systems wherein doping is otherwise inappropriate. It may lead to the design of many fundamental and novel optoelectronic applications.
Adjudin-preconditioned neural stem cells enhance neuroprotection after ischemia reperfusion in mice
Transplantation of neural stem cells (NSCs) has been proposed as a promising therapeutic strategy for the treatment of ischemia/reperfusion (I/R)-induced brain injury. However, existing evidence has also challenged this therapy on its limitations, such as the difficulty for stem cells to survive after transplantation due to the unfavorable microenvironment in the ischemic brain. Herein, we have investigated whether preconditioning of NSCs with adjudin, a small molecule compound, could enhance their survivability and further improve the therapeutic effect for NSC-based stroke therapy. We aimed to examine the effect of adjudin pretreatment on NSCs by measuring a panel of parameters after their transplantation into the infarct area of ipsilateral striatum 24 hours after I/R in mice. We found that pretreatment of NSCs with adjudin could enhance the viability of NSCs after their transplantation into the stroke-induced infarct area. Compared with the untreated NSC group, the adjudin-preconditioned group showed decreased infarct volume and neurobehavioral deficiency through ameliorating blood–brain barrier disruption and promoting the expression and secretion of brain-derived neurotrophic factor. We also employed H2O2-induced cell death model in vitro and found that adjudin preconditioning could promote NSC survival through inhibition of oxidative stress and activation of Akt signaling pathway. This study showed that adjudin could be used to precondition NSCs to enhance their survivability and improve recovery in the stroke model, unveiling the value of adjudin for stem cell-based stroke therapy.
Background
Ischemic stroke represents the most common cause of serious morbidity and mortality and is the second major cause of disability worldwide [1]. Few pharmacotherapies have drawn the attention of medical circles; one is the recanalization of occluded vessels via thrombolysis using tissue plasminogen activator (tPA), which, owing to a narrow time window, can only be applied to a minority of patients [2,3]. Because of the limitations and complications of tPA-based treatment, restorative therapies are urgently needed to promote brain remodeling and repair once acute ischemic stroke (AIS) injury has occurred. Fortunately, stem cell-based strategies have emerged as a promising therapeutic approach for AIS and have gained increasing interest in recent years for their unique mode of action, namely the ability to abrogate the subacute and chronic secondary cell death associated with the disease [4,5]. Currently, different types of stem cells are used for the treatment of ischemic stroke, including neural stem cells (NSCs) [6], mesenchymal stem cells (MSCs) [7], oligodendrocyte progenitor cells (OPCs) [8], embryonic stem cells (ESCs) [9], endothelial progenitor cells (EPCs) [10,11], induced pluripotent stem cells (iPSCs) [12], vascular progenitor cells (VPCs) [7], and so forth. These stem cells can secrete various neurotrophic factors and cytokines, or differentiate into multiple cell types, to compensate for I/R-induced cell death, strengthen the connections between synapses, and establish new neural circuits to attenuate ischemic brain injury and finally improve neurobehavioral recovery [13,14]. In clinical studies, a number of preliminary trials found that transplanting stem cells into patients between 3 days and 24 months after stroke was feasible and safe [15,16]. (* Correspondence: wlxia@sjtu.edu.cn; † Equal contributors)
However, recent evidence consistently challenges this therapy on its limitations, especially the hostile microenvironment in the ischemic brain which presents a significant hurdle to the survival of transplanted cells. Hicks et al. [17] demonstrated that only 1-3% of grafted cells survived in the ischemic brain 28 days after grafting. The massive death of transplanted stem cells will hamper the application of cell-based therapy, which might be influenced by the production of reactive oxygen species (ROS) and inflammatory response mediators after I/R injury [18][19][20]. Thus finding a strategy to overcome this obstacle would potentially be of great value.
In order to resolve the problem of cell survival after transplantation, several remedial approaches have been suggested. Both preconditioned stem cells and gene modification have exhibited improved cell viability after transplantation [21][22][23][24]. However, although these methods showed a better transplantation outcome, some challenges remain in using chemical factors to precondition stem cells or in modifying certain genes in stem cells. For example, lipopolysaccharide (LPS), IL-6, minocycline, and melatonin are all available factors for stem cell preconditioning, which can reduce cell death, increase stem cell proliferation and neurotrophic factor secretion, enhance cytoprotection and angiogenesis, and accelerate functional recovery in acute and subacute ischemia [25][26][27][28]. However, LPS can cause neuroinflammation, hypotension, or sepsis in pathological injury [29], and IL-6-pretreated MSCs can promote osteosarcoma growth, suggesting that IL-6 mediates the recruitment of MSCs to facilitate tumor progression [30]. So far, minocycline and melatonin have been considered low-toxicity, biologically natural agents suitable for pretreating cells. As for gene modification, uncontrolled expression of introduced genes can have many adverse impacts on the body, such as leukemia, which has been attributed to insertional mutagenesis combined with acquired somatic mutations following gene therapy of SCID-X1 patients [31]. Compared with gene modification, preconditioned stem cell therapy seems more beneficial, simpler, and safer for ischemic stroke therapy [32]. Therefore, we wish to offer safe and effective drugs that could be combined with NSCs for future clinical application.
Adjudin, 1-(2,4-dichlorobenzyl)-1H-indazole-3-carbohydrazide, formerly called AF-2364, is a reversible antispermatogenic compound under development as a potential nonhormonal male contraceptive; it can disrupt the adherens junctions between germ cells and Sertoli cells without affecting testosterone production [33]. Adjudin is a small molecular derivative of indazole and an analog of the chemotherapy drug lonidamine, which has been demonstrated to have no apparent side effects in treated animals [33]. It has also been reported that many indazole derivatives are nonsteroidal anti-inflammatory drugs (NSAIDs) that can suppress prostaglandin E2 (PGE2) synthesis, nitric oxide (NO) production, and the release of cytokines and chemokines [34]. Our previous results demonstrated that adjudin, given by intraperitoneal injection, could protect against cerebral I/R injury by inhibiting neuroinflammation and blood-brain barrier (BBB) disruption [35]. We also found that adjudin could attenuate LPS-induced BV2 activation by suppressing the NF-κB pathway [36], which indicates that adjudin is a promising neuroprotective agent for ischemic stroke therapy. In this study, we aimed to examine whether pretreatment of NSCs with adjudin could provide better neuroprotection than nonpreconditioned NSCs after I/R injury.
Cell culture and characterization
All animal experimental protocols were approved by the Institutional Animal Care and Use Committee (IACUC) of Shanghai Jiao Tong University, Shanghai, China (Permission number: Bioethics 2012022). NSCs were harvested from the cortex of the E14 green fluorescent protein (GFP)-transgenic mice (Animal Research Center of Nanjing University, Nanjing, China). In brief, bilateral cortex zones from mouse brains were dissected in HBSS and dissociated mechanically. The cells were collected and resuspended in DMEM/F12 (1:1) medium (Gibco, Carlsbad, CA, USA) containing B27 supplement (Gibco), L-glutamine (Sigma-Aldrich), 20 ng/ml mouse basic fibroblast growth factor (Gibco), and 20 ng/ml mouse epidermal growth factor (Gibco). Cells were monolayer cultured on a 60-mm plastic dish (Corning Incorporated, Corning, NY, USA) precoated with poly-L-ornithine hydrobromide (Sigma, St Louis, MO, USA) and laminin (Sigma) at 37°C with 5% CO 2 in an incubator (Thermo Scientific, Barrington, IL, USA). The medium was changed every 2 days and cells were passaged in about 5 days. Cells that had been passaged three to five times were used for the experiments, which strongly maintained their proliferation and differentiation ability.
Adjudin pretreatment of NSCs
The NSCs were preconditioned with adjudin before the in vitro experiments or transplantation. Adjudin was added to the cell culture medium at a final concentration of 0, 5, 10, 30, or 60 μM for 24 hours, followed by drug washout before the experiments. Cell death was quantified by a standard lactate dehydrogenase (LDH) release assay as described previously [36]. Cell viability was assessed with a CCK-8 assay kit (Dojindo Laboratories, Kumamoto, Japan). Data were acquired using a microplate reader (Synergy2; BioTek, Winooski, VT, USA).
Cell death and cell survival analysis in vitro
To evaluate NSC viability under oxidative stress, NSCs were seeded at a density of 1 × 10 5 or 1 × 10 4 cells per well in 24-well or 96-well culture plates (Corning), respectively, and subjected to different concentrations of H 2 O 2 (0.05, 0.1, 0.3, 0.5 mM; Sigma) for 1 hour. NSCs were then washed three times with phosphate-buffered saline (PBS) and cultured for another 24 hours in high-glucose DMEM with 10% FBS. These NSCs were then examined by the LDH assay and the CCK-8 assay kit.
To determine the effect of adjudin on NSC viability under oxidative stress, NSCs were pretreated by adjudin with the concentration of 10 or 30 μM for 24 hours. The cells were then washed three times with PBS and subjected to 0.1 mM H 2 O 2 for 1 hour followed by LDH assay and CCK-8 assay 24 hours later.
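The LDH and CCK-8 readouts used above reduce to simple normalizations of plate-reader absorbances. A minimal sketch follows; the formulas are the standard kit calculations, and the absorbance values are hypothetical, not measurements from this study.

```python
def ldh_cytotoxicity(a_sample, a_spontaneous, a_maximum):
    """Percent cytotoxicity from LDH release: sample absorbance normalized
    between spontaneous release (untreated cells) and maximum release
    (fully lysed cells)."""
    return 100.0 * (a_sample - a_spontaneous) / (a_maximum - a_spontaneous)

def cck8_viability(a_sample, a_control, a_blank):
    """Percent viability from CCK-8 absorbance relative to untreated control."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

# Hypothetical absorbances for an H2O2-stressed well
print(round(ldh_cytotoxicity(0.62, 0.20, 1.25), 1))    # → 40.0
print(round(cck8_viability(0.85, 1.40, 0.10), 1))      # → 57.7
```

The blank/spontaneous wells subtract background signal from medium and baseline enzyme release, so both metrics are comparable across plates.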
ATP assay
ATP levels were quantified using the Roche ATP Bioluminescence Assay Kit (HS II; Indianapolis, IN, USA) following the standard protocol provided by the vendor. In brief, cells were washed once with PBS and lysed with the Cell Lysis Reagent for 15 min. Then 50 μl of the homogenates were mixed with 150 μl of the Luciferase Reagent, and the luminescence was detected using a microplate reader (Synergy2). The protein concentrations of the samples were quantified with the bicinchoninic acid (BCA) protein assay (Pierce, Rockford, IL, USA). The ATP concentrations of the samples were calculated using an ATP standard and normalized against the protein content of the samples.
Cell proliferation and differentiation in vitro
To evaluate NSC proliferation and differentiation after treatment with adjudin in vitro, NSCs were monolayer cultured on poly-L-ornithine hydrobromide (Sigma) and laminin (Sigma)-coated glass cover slips in a 24-well plate (Corning). After pretreatment with adjudin at concentrations of 10 or 30 μM for 24 hours, NSCs were washed with fresh medium to remove the drug. Then 3 days later, cells were immunostained with mouse anti-Nestin (Millipore), goat anti-Sox2 (Santa Cruz Technology), rabbit anti-glial fibrillary acidic protein (GFAP) (Millipore), mouse anti-Doublecortin (Santa Cruz Technology), and rabbit anti-Ki67 (1:200; Abcam, Cambridge, MA, USA).
Transient middle cerebral artery occlusion model
Focal cerebral ischemia in mice was performed as described previously [35]. In brief, adult male ICR mice weighing 25-30 g were anesthetized with ketamine/xylazine (100 mg/10 mg/kg; Sigma) intraperitoneally. Body temperature was maintained at 37 ± 0.5°C using a heating pad (RWD Life Science, Shenzhen, China). Under the surgical microscope (Leica, Solms, Germany), the left common carotid artery (CCA), the external carotid artery (ECA), and the internal carotid artery (ICA) were isolated. Then a 6-0 suture (Dermalon, 1741-11; Covidien, OH, USA) with a round tip and coated with silicone was inserted from the ECA into the ICA and reached the circle of Willis to occlude the origin of the middle cerebral artery (MCA) until a slight resistance was felt. The distance from the furcation of the ECA/ICA to the opening of the MCA was 9 ± 0.5 mm. The success of occlusion was determined by monitoring the decrease in surface cerebral blood flow to 80% of baseline, which was verified by a laser Doppler flow-meter (Moor LAB; Moor Instruments, Devon, UK). Reperfusion was performed by withdrawing the suture 2 hours after middle cerebral artery occlusion (MCAO). To confirm successful occlusion/reperfusion, cerebral blood flow was tested again. The sham operated mice were subjected to the same procedure except for the suture insertion.
NSC transplantation
Twenty-four hours after transient middle cerebral artery occlusion (tMCAO), mice were divided randomly into three groups for NSC or vehicle injection: PBS group, NSC group, and adjudin-pretreated group. The animals were anesthetized with ketamine/xylazine intraperitoneally, and received stereotaxic transplantation. Adjudin-pretreated or untreated NSC suspension with 1 × 10 6 cells in 5-15 μl PBS was injected into the striatum of the ipsilateral hemisphere in mice, with the following coordinates: M-L, −1.5 mm; D-V, −3.25 mm. The same amount of PBS was injected as control. Deposits were delivered at 0.5 μl/min and the needle was left in situ for 5 min post injection before being removed slowly. The wound was then closed and the animal was returned to the cage for follow-up experiments.
Behavioral assessment
Three days after tMCAO, modified neurological severity scores (mNSS) were assessed by an investigator who was blind to the treatment regimen to assess the neurological status of the animals, which is a composite of motor, reflex, and balance tests (normal score, 0; maximal deficit score, 14) as described previously [37]. Total neurological score was calculated as the sum of scores on limb flexion (range 0-3), walking gait (range 0-3), beam balance (range 0-6), and reflex absence (range 0-2).
The rotarod test required mice to balance on a rotating rod. Mice were given 1-min adaption on the rod, which were then accelerated up to 40 rpm within 2 min. The duration of mice remaining on the rotating rod was recorded. Mice were examined at various time points (≤35 days) after NSC transplantation.
Measurement of infarct volume
Mice from each group were sacrificed 3 days after cell transplantation. Following PBS perfusion, mouse brains were perfused with 4% paraformaldehyde (PFA), immediately removed, frozen in prechilled isopentane, and stored at −80°C. The tissues were then cut into a series of 20-μm-thick coronal sections from the beginning of the infarct area to the end, and one section out of every 10 was collected on the same slide to give a representative view of the cerebral injury, with a distance of 200 μm between adjacent sections. The entire set of brain sections was immersed in 0.1% cresyl violet (Sinopharm Chemical Reagent Co., Shanghai, China) for 30 min and then rinsed in distilled water for 10 min. The infarct area in each section was calculated using NIH ImageJ software by the following formula: Infarct area (mm²) = contralateral hemisphere area (mm²) − ipsilateral undamaged area (mm²). The infarct volume between two adjacent sections was calculated as V = h × (S1 + S2)/2, where S1 and S2 are the infarct areas of the two sections and h is the distance between them. The total infarct volume was calculated as the sum of the infarct volumes of all pairs of adjacent sections [38].
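The per-section area correction and the section-to-section summation can be sketched in a few lines. This assumes the trapezoidal approximation between adjacent sections, consistent with summing pairwise volumes at a 200-μm spacing; the area values below are hypothetical illustrations, not data from the study.

```python
def infarct_area(contralateral_mm2, ipsilateral_undamaged_mm2):
    """Indirect infarct area for one section (corrects for edema swelling)."""
    return contralateral_mm2 - ipsilateral_undamaged_mm2

def total_infarct_volume(areas_mm2, spacing_mm=0.2):
    """Sum trapezoidal volumes between adjacent sampled sections (h = 200 um)."""
    return sum(spacing_mm * (s1 + s2) / 2.0
               for s1, s2 in zip(areas_mm2, areas_mm2[1:]))

# Hypothetical per-section measurements across the lesion:
# (contralateral hemisphere area, ipsilateral undamaged area) in mm^2
sections = [(20.0, 18.0), (20.5, 16.5), (21.0, 15.0), (20.5, 16.5), (20.0, 19.0)]
areas = [infarct_area(c, u) for c, u in sections]
print(round(total_infarct_volume(areas), 2))  # → 3.1 (mm^3)
```

Using the contralateral hemisphere as the reference (rather than measuring the infarct directly) is the standard indirect method that avoids overestimating volume due to edema.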
Immunohistological staining
Cultured NSCs or brain sections (20 μm in thickness) were fixed with absolute methanol in a −20°C freezer for about 10 min and then washed three times in PBS, and the slices were blocked in 10% normal donkey serum (Jackson ImmunoResearch, West Grove, PA, USA) for 30 min at RT. Cryosections were incubated with one of the following primary antibodies in 1% of the blocking serum at 4°C overnight: mouse anti-CD11b antibody (1:100; BD Biosciences, San Jose, CA, USA), rabbit anti-Occludin (1:100; Invitrogen, Carlsbad, CA, USA), rabbit anti-ZO-1 (1:100; Invitrogen), and goat anti-CD31 antibodies (1:100; R&D Systems, Tustin, CA, USA). After being washed three times with PBS, sections were incubated with Alexa-488-conjugated secondary antibody (1:500 dilution; Life Technologies, CA, USA) containing 1% normal donkey serum at RT for 1 hour in darkness, and nuclei were stained with 4,6-diamidino-2-phenylindole (DAPI) (1:500 dilution; Beyotime Institute of Biotechnology, China) for 10 min. After washing with PBS, slides were mounted with antifade mounting medium (Beyotime) and images were acquired under a Leica upright microscope (Leica DM2500) or a confocal laser-scanning microscope (Leica TCS SP5 II). IgG detection in the brain parenchyma was used to indicate the integrity of the BBB. These brain sections were incubated with donkey anti-mouse IgG conjugated with biotin (1:500; Life Technologies) and visualized by adding avidin-Alexa Fluor 488.
Western blot analysis
Tissue samples were collected from the striatum and cortex of the ipsilateral hemisphere, sheared, briefly processed ultrasonically, and lysed in lysis buffer (Thermo Scientific, Rockford, IL, USA) containing Complete Protease Inhibitor Cocktail, Phosphatase Inhibitor Cocktail, and 2 mM phenylmethylsulfonyl fluoride (PMSF). The lysates were centrifuged at 12,000 rpm for 20 min at 4°C, and the supernatants were collected. Immunoblotting was carried out as described previously [39]. A BCA assay kit (Pierce) was used for total protein quantification. Total proteins (40 μg) were denatured at 95°C for 5 min, electrophoresed through 10 or 6% (for ZO-1) SDS-PAGE, and then electrotransferred to 0.45-μm nitrocellulose membranes (Whatman, Piscataway, NJ, USA). Membranes were then blocked with 5% skim milk for 1 hour at RT and incubated with primary antibody solutions at 4°C overnight. After four washes in TBST, the membranes were hybridized with the appropriate HRP-conjugated secondary antibody (1:5000; Jackson) for 1 hour at RT and washed four times with TBST again. The final detection was visualized using enhanced chemiluminescence (ECL) reagents (Thermo Scientific, Rockford, IL, USA), and images were captured using the ChemiDoc XRS system (BioRad, Hercules, CA, USA). Loading differences were normalized using an anti-actin antibody.
Evans Blue extravasation
Mice were anesthetized with ketamine/xylazine, and then 4 ml/kg of 2% Evans Blue (Sigma) in normal saline was injected through the left jugular vein at 3 days following tMCAO. After 2 hours of circulation, the mice were anesthetized and perfused with normal saline. The ipsilateral and contralateral hemisphere of the mice were removed and weighed. Then EB was extracted by homogenizing the samples in 1 ml of 50% trichloroacetic acid solution followed by centrifuging at 12,000 rpm for 20 min. The supernatant was diluted with 100% ethanol at a ratio of 1:3. The amount of EB was determined quantitatively by measuring the 610 nm absorbance of the supernatant (BioTek, Winooski, VT, USA).
CD31/BrdU double immunostaining
Brains were post-fixed for 24 hours, immersed for 48 hours in 30% sucrose in PBS, immediately frozen, and then sectioned using a freezing microtome (Leica, Solms, Germany). Coronal sections of 20-μm thickness were cut. Floating coronal sections were collected in antigen protective solution, which contains 20% glycol, 30% glycerol, and 50% PBS. Sections were first treated with 2 mol/L HCl for 20 min at 37°C and then neutralized with sodium borate twice, each time for 10 min. Sections were then treated with 0.3% Triton-X 100 in PBS for 15 min, blocked with 10% BSA, and incubated with CD31 (1:200; R&D) and BrdU (1:50; Santa Cruz) antibodies at 4°C overnight. Finally, the sections were incubated with secondary antibodies (1:500; Thermo Fisher) for 60 min at room temperature. Stained sections were mounted after rinsing.
Statistical analysis
Each experiment was repeated at least three times. All data are presented as mean ± SEM. Data were analyzed by one-way ANOVA followed by Tukey's honestly significant difference (HSD) test using GraphPad InStat (GraphPad Software Inc., La Jolla, CA, USA). P < 0.05 was considered statistically significant.
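The analysis pipeline (one-way ANOVA followed by Tukey's post hoc test) can be reproduced outside GraphPad InStat; the sketch below substitutes SciPy (`f_oneway` and `tukey_hsd`, the latter requiring SciPy ≥ 1.8) and uses invented group values.

```python
from scipy import stats

# Hypothetical measurements for three groups (n = 5 each), e.g. infarct volumes.
pbs = [52.0, 48.0, 50.0, 55.0, 49.0]   # PBS (vehicle) group
nsc = [38.0, 35.0, 40.0, 36.0, 37.0]   # untreated NSC group
adj = [25.0, 27.0, 24.0, 26.0, 28.0]   # adjudin-pretreated NSC group

# One-way ANOVA tests for an overall group effect.
f_stat, p_value = stats.f_oneway(pbs, nsc, adj)

# Tukey's HSD post hoc test gives the pairwise comparisons (SciPy >= 1.8).
posthoc = stats.tukey_hsd(pbs, nsc, adj)

print(f"F = {f_stat:.2f}, ANOVA p = {p_value:.2e}")
print(posthoc.pvalue)  # 3x3 matrix of pairwise p-values
```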
NSC culture and characterization
Neural stem cells were generated from the cortex of E14 mice and characterized by immunocytochemistry. A small proportion of the primary cells generated neurospheres after 7 days of initial culture (Fig. 1a). When NSCs were cultured on poly-L-ornithine hydrobromide- and laminin-coated plates, they grew as adherent monolayers (Fig. 1b). Immunostaining showed that the cells were Nestin+ and Sox2+ but GFAP− and DCX− (Fig. 1c-f), suggesting that the majority of cells in the culture maintained a stem cell phenotype.
Differentiation and proliferation of NSCs after pretreatment with adjudin
To explore whether adjudin affects the differentiation and proliferation of NSCs, cells were cultured as a monolayer and pretreated with adjudin at 10 or 30 μM. Immunostaining indicated that NSCs under both concentrations remained positive for Nestin and Sox2 and negative for DCX, whereas GFAP was negative in 10 μM- and positive in 30 μM-pretreated NSCs (Additional file 1: Figure S1a). Fluorescent photomicrographs of Ki67 showed that 10 μM adjudin did not affect NSC proliferation, whereas 30 μM adjudin markedly inhibited it (Additional file 1: Figure S1b). Together, these results indicate that 10 μM adjudin had no effect on the differentiation or proliferation of NSCs.
Adjudin preconditioning improved the survival and maintained the ATP level of NSCs under H2O2 stress
To evaluate whether adjudin preconditioning could reduce NSC death under stress in vitro, we used a hydrogen peroxide oxidative stress model. We first investigated the effect of different concentrations of adjudin and H2O2 on NSC viability in order to establish working concentrations. After 24 hours of adjudin pretreatment, the LDH assay revealed that adjudin did not induce cell death even at 60 μM (Additional file 2: Figure S2a), but the CCK-8 assay showed that 30 and 60 μM adjudin significantly decreased the viability readout, while 5 and 10 μM had no effect (Additional file 2: Figure S2b). Combined with the Ki67 immunostaining results (Additional file 1: Figure S1b), we inferred that this was because high concentrations of adjudin inhibit cell proliferation rather than decrease NSC viability. As shown in Additional file 2: Figure S2b, treatment with H2O2 reduced NSC viability significantly in a concentration-dependent manner. The optimal concentration of H2O2 for subsequent experiments was determined to be 0.1 mM, at which cell viability was 40-50% (Additional file 2: Figure S2c, d). After 1 hour of 0.1 mM H2O2 stimulation, cells were replenished with fresh medium and cultured for another 24 hours; the subsequent LDH and CCK-8 assays revealed that adjudin-preconditioned NSCs (10 and 30 μM) showed significantly less death and greater survival than nonpreconditioned NSCs (Fig. 2a, b). This cytoprotective effect was supported by the ATP assay, as adjudin pretreatment maintained the ATP level of NSCs after H2O2 stimulation (Fig. 2c). The serine/threonine kinase Akt, a member of a conserved family of signal transduction enzymes, not only plays a pivotal role in the cell death/survival pathway [40,41] but also regulates inflammatory responses and apoptosis [42].
Here we used western blot analysis to assess Akt signaling activity. Compared with nonpreconditioned NSCs, adjudin pretreatment dramatically increased the p-Akt/Akt ratio after H2O2 stimulation (Fig. 2d, e).
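The text does not give the formulas used to convert LDH and CCK-8 readings into death and viability percentages; the sketch below shows the conventional background-corrected ratios, with all optical-density values invented.

```python
# Hedged sketch of the usual LDH-release and CCK-8 calculations; the exact
# formulas used in this study are not stated, and every OD value is invented.

def ldh_cytotoxicity_pct(sample, background, full_lysis):
    """Percent cytotoxicity relative to a fully lysed (maximum-release) control."""
    return 100.0 * (sample - background) / (full_lysis - background)

def cck8_viability_pct(sample, blank, untreated):
    """Percent viability relative to the untreated control, blank-corrected."""
    return 100.0 * (sample - blank) / (untreated - blank)

print(ldh_cytotoxicity_pct(sample=0.75, background=0.25, full_lysis=1.25))  # 50.0
print(cck8_viability_pct(sample=1.0, blank=0.25, untreated=1.75))           # 50.0
```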
Adjudin preconditioning upregulated antioxidant genes and reduced oxidative stress in vitro
We next sought to elucidate the underlying mechanism of adjudin-induced cytoprotection. As exogenous H2O2 induces a strong increase in intracellular ROS levels within 1 hour of cell treatment [43], we investigated the expression of iNOS and several antioxidant genes using RT-PCR and western blot analysis. Real-time RT-PCR assays showed that adjudin preconditioning significantly inhibited iNOS expression (Fig. 3a) and upregulated catalase (Fig. 3b), SOD2 (Fig. 3c), and GCLC (Additional file 3: Figure S3a) after 1 hour of H2O2 stimulation followed by 12 hours of reculture, whereas it did not change NOX4, HO-1, NQO1, or Nrf2 levels (Additional file 3: Figure S3b-e). This was also supported by western blot analysis of whole-cell lysates from the NSCs, showing that adjudin significantly lowered iNOS protein expression and induced higher levels of catalase and SOD2 after 1 hour of H2O2 stimulation followed by 24 hours of culture under normal conditions (Fig. 3d-g). These findings suggest that resistance to oxidative stress is one mechanism of adjudin-induced cytoprotection.
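Relative mRNA expression "normalized to Rplp0" is conventionally computed with the 2^-ΔΔCt method; assuming that method was used here, the arithmetic looks like this (all Ct values are hypothetical):

```python
# Sketch of relative quantification by the 2^-ddCt method, consistent with
# the "normalized to Rplp0" description in the text; Ct values are invented.

def relative_expression(ct_target, ct_rplp0, ct_target_ctrl, ct_rplp0_ctrl):
    """Fold change of a target gene versus control, normalized to Rplp0."""
    d_ct_sample = ct_target - ct_rplp0            # delta-Ct in the treated sample
    d_ct_control = ct_target_ctrl - ct_rplp0_ctrl # delta-Ct in the control sample
    return 2.0 ** -(d_ct_sample - d_ct_control)   # 2^-(ddCt)

# A target whose Ct drops by 2 cycles relative to Rplp0 reads as ~4-fold up.
fold = relative_expression(ct_target=22.0, ct_rplp0=18.0,
                           ct_target_ctrl=24.0, ct_rplp0_ctrl=18.0)
print(fold)  # 4.0
```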
Adjudin preconditioning promoted expression of neurotrophic factors in vitro
Because NSCs secrete many neurotrophic factors and other soluble molecules that modify the release of inflammatory mediators and the oxidative reaction [13,27,44], we tested whether adjudin changed their expression in NSCs in vitro. Significantly higher gene expression of BDNF, nerve growth factor (NGF), and glial cell-derived neurotrophic factor (GDNF) was detected in the adjudin-preconditioned NSC group after 1 hour of H2O2 stimulation and 12 hours of reculture, compared with the nonpreconditioned NSC group (Fig. 4a-c).
Adjudin preconditioning reduced brain infarct volume and improved neurobehavioral outcome after ischemia/reperfusion

Twenty-four hours after tMCAO, mice were randomly divided into three groups for NSC or vehicle injection: PBS group, NSC group, and adjudin-pretreated NSC group. NSCs (1 × 10^6 cells suspended in PBS), either pretreated with adjudin or untreated, were injected into the striatum of the ipsilateral hemisphere. Brain infarct volume was determined by cresyl violet staining 3 days after cell transplantation (Fig. 5a). Adjudin-pretreated NSCs reduced infarct volume by as much as 50% compared with the PBS group, whereas untreated NSCs produced only a ~30% reduction (Fig. 5b). Meanwhile, adjudin preconditioning improved behavioral performance, with the neuroscore decreasing by approximately 50% relative to the PBS group, while untreated NSCs resulted in only a 25% decrease (Fig. 5c). These findings illustrate that adjudin pretreatment significantly attenuated I/R-induced cerebral injury. Moreover, compared with the untreated NSC, PBS, and sham groups, the adjudin preconditioning group showed a considerably increased p-Akt/Akt ratio in both the cortex and the striatum (Fig. 5d-g).
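The text reports infarct volume from cresyl violet sections without giving the formula; one widely used, edema-insensitive (indirect) version integrates section areas and expresses the lost ipsilateral tissue as a percentage of the contralateral hemisphere. The sketch below assumes that method, with hypothetical per-section areas and spacing.

```python
# Hedged sketch of indirect (edema-corrected) infarct volume estimation from
# serial sections; areas (mm^2) and section spacing are invented, and this
# formula is a common convention rather than the one stated in the paper.

def infarct_volume_pct(contra_areas, ipsi_intact_areas, spacing_mm=1.0):
    """Indirect infarct volume as % of the contralateral hemisphere volume."""
    v_contra = sum(contra_areas) * spacing_mm          # contralateral hemisphere
    v_ipsi_intact = sum(ipsi_intact_areas) * spacing_mm  # surviving ipsilateral tissue
    return 100.0 * (v_contra - v_ipsi_intact) / v_contra

print(infarct_volume_pct(contra_areas=[40.0, 42.0, 38.0],
                         ipsi_intact_areas=[30.0, 28.0, 32.0]))  # 25.0
```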
Adjudin preconditioning reduced cytokine production and attenuated microglial activation after ischemia/reperfusion
To investigate whether adjudin-pretreated NSCs exerted a stronger immunomodulatory influence in the acute phase of cerebral ischemia, we first examined IL-6, IL-1β, and TNF-α mRNA expression in both the cortex and the striatum. The results showed that IL-6, IL-1β, and TNF-α mRNA increased dramatically at day 3 following tMCAO. Expression of the three cytokines decreased significantly in the untreated NSC group compared with the PBS group, and was reduced further in the adjudin-pretreated NSC group (Fig. 6a-f). As the resident immune cells of the central nervous system (CNS), microglia are activated by I/R injury and regulate the primary events of neuroinflammatory responses [45]. We then investigated whether adjudin preconditioning also affected microglia in the tMCAO model. A CD11b signal, an indicator of active microglia, was revealed by fluorescence microscopy (Fig. 6g, h). In the sham group, no obvious microglial activation or CD11b signal was detected (Fig. 6g top left panel). In the PBS group, strong CD11b staining was widely found in the ipsilateral hemisphere (Fig. 6g top right panel). In contrast, stereotactic injection of nonpreconditioned NSCs after reperfusion significantly inhibited the activation of microglia (Fig. 6g bottom left panel).

Fig. 4 Induction of neurotrophic factors with adjudin preconditioning in vitro. Real-time RT-PCR assays of NSCs. Relative mRNA expression of BDNF, NGF, and GDNF normalized to Rplp0 (a-c). Bars represent mean ± SEM from three independent experiments. *P < 0.05. BDNF brain-derived neurotrophic factor, GDNF glial cell-derived neurotrophic factor, NGF nerve growth factor
Moreover, adjudin-pretreated NSCs further inhibited microglial activation, with much less CD11b signal detected (Fig. 6g bottom right panel). Statistical analysis of the CD11b signal from brain sections indicated that adjudin preconditioning significantly attenuated microglial activation in the ipsilateral region of the brain after I/R injury (Fig. 6h).
It has been reported that dynamic changes in M1/M2 macrophage activation are involved in CNS damage and regeneration, and M1/M2 macrophage polarization plays an important role in controlling the balance between promoting and suppressing inflammation [20,48]. Here we stained CD16 and Arg-1 to assess M1/M2 microglial activation. The results revealed that ischemic brain damage prominently activated both M1 and M2 microglia compared with the sham group (Fig. 7a, d). Furthermore, comparing the adjudin-pretreated NSC group with the nonpretreated NSC group, we found that adjudin pretreatment significantly suppressed M1 microglia and promoted M2 microglia expression (Fig. 7a-f).

Fig. 5 Adjudin-pretreated NSCs reduced brain infarct volume and improved neurobehavioral outcome after I/R. Representative sets of cresyl violet staining of brain sections from mice treated with PBS, untreated NSCs, and adjudin-pretreated NSCs 3 days following tMCAO; the dashed line shows the border of the infarct area (a). Quantification of infarct volumes (b); n = 8 in each group. Adjudin-pretreated NSCs significantly ameliorated neurological deficits 3 days after transplantation compared with the PBS or NSC group; n = 14 for the PBS and untreated NSC groups, n = 19 for the adjudin-pretreated NSC group (c). Adjudin-pretreated NSCs promoted phosphorylation of Akt in the ipsilateral cortex (d) and striatum (e) after tMCAO. Representative western blot showing that adjudin increased the p-Akt protein level 3 days after tMCAO compared with the sham, PBS, and NSC groups. Quantification of densitometric values of the protein bands of cortex and striatum normalized to total Akt (f, g); n = 6 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. NSC neural stem cell, PBS phosphate-buffered saline
Adjudin preconditioning attenuated oxidative stress after ischemia/reperfusion
Since ROS also play an important role in cerebral I/R injury, we next investigated the effect of adjudin preconditioning on resistance to oxidative stress in vivo. Compared with the PBS and NSC groups, the iNOS mRNA level was significantly decreased in the adjudin preconditioning group in both the cortex and striatum (Fig. 8a, d). The expression of the antioxidant genes catalase (Fig. 8b, e) and SOD2 (Fig. 8c, f) was also clearly increased in the adjudin pretreatment group after I/R injury. Western blot analysis of whole-cell lysates from the ipsilateral cortex and striatum supported these results, showing that adjudin preconditioning dramatically decreased iNOS protein expression and promoted higher levels of catalase and SOD2 3 days after I/R (Fig. 8g-n).

Fig. 6 Adjudin-pretreated NSCs inhibited cytokine production and activation of microglia after I/R. Relative mRNA expression of IL-6, IL-1β, and TNF-α normalized to Rplp0 detected 3 days following cell transplantation. Expression of IL-6 (a, d), IL-1β (b, e), and TNF-α (c, f) in the ipsilateral cortex and striatum in the NSC and adjudin-pretreated NSC groups; n = 6 in each group. Immunofluorescence staining for CD11b (green) in the sham group and in tMCAO groups with PBS, NSC, or adjudin-pretreated NSC injection; samples were acquired 3 days after cell transplantation, with DAPI staining for contrast (g); scale bar = 100 μm. Quantification of CD11b immunofluorescence intensity in each group (h); n = 8 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. DAPI 4,6-diamidino-2-phenylindole, NSC neural stem cell, PBS phosphate-buffered saline

Fig. 8 (legend continued) Representative western blots (g, h) and quantification of densitometric values of the protein bands of cortex (i-k) and striatum (l-n) normalized to the respective β-tubulin; n = 6 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. NSC neural stem cell, PBS phosphate-buffered saline
Adjudin preconditioning enhanced neuroprotection after tMCAO via the p38 and JNK but not the ERK signaling pathway
To assess the phosphorylation status of the MAPK signaling pathways, western blot analysis was used. I/R significantly increased p38, JNK, and ERK1/2 phosphorylation levels in the cortex and striatum compared with sham, and this induction was inhibited after NSC transplantation (Fig. 9a, c). However, compared with the nonpretreated NSC group, the adjudin preconditioning group more profoundly inhibited phosphorylation of p38 and JNK in the cortex (Fig. 9a, b, d, e), while ERK1/2 phosphorylation (Fig. 9c, f) showed no detectable changes between transplantation groups. No significant differences were observed in total ERK1/2, JNK1/2, or p38 MAPK expression among the experimental groups. These results indicate that I/R induces inflammatory cytokines and oxidative stress by activating the p38 and JNK pathways but not the ERK signaling pathway.
Adjudin preconditioning attenuated ischemia/reperfusion-induced blood-brain barrier leakage
The permeability of the BBB after ischemic brain injury was assessed by measuring the extravasation of EB dye and IgG protein, neither of which crosses the intact BBB into the brain parenchyma under normal physiological conditions. A substantial amount of EB and IgG was detected in the ipsilateral hemisphere of the PBS group, while NSCs remarkably reduced EB and IgG leakage, and adjudin-pretreated NSCs decreased the leakage further, indicating that BBB integrity was better protected by adjudin-pretreated NSCs (Fig. 10a-d). In the sham group, no EB dye or IgG signal was detected in the same brain regions (Fig. 10a-d). To investigate the mechanism of BBB disruption, we analyzed the localization of the tight junction (TJ)-related proteins ZO-1 and occludin in cerebral vascular structures by immunofluorescence microscopy, in conjunction with CD31, an endothelial marker that also localizes at the BBB, and determined changes in their protein levels by western blot analysis. Confocal microscopy showed that ZO-1 and occludin staining was continuously located along the endothelial cell margins of cerebral microvessels in the sham group, whereas this continuity was disrupted after I/R injury, with many gaps forming along the microvessels (Fig. 10e). This process was reversed by stereotactic injection of NSCs, and compared with the nonpretreated NSC group, adjudin preconditioning further lessened gap formation after tMCAO (Fig. 10e). To corroborate this result, western blot analysis of lysates from the ipsilateral region was performed. The significant reduction of ZO-1 and occludin levels after I/R (PBS versus sham) was rescued by NSC transplantation, and adjudin preconditioning had a better effect in protecting against the reduction of ZO-1 and occludin after I/R injury (Fig. 10f).
Together, these results further demonstrated that the BBB destruction after I/R injury could be effectively rescued by adjudin-pretreated NSCs.
Adjudin preconditioning enhanced the secretion of neurotrophic factors after ischemia/reperfusion
To evaluate the ability of NSCs to secrete neurotrophic factors, we measured BDNF levels in both the cortex and striatum of the ipsilateral hemisphere using RT-PCR and western blot analysis 3 days after ischemia and transplantation. Real-time RT-PCR assays showed that these paracrine factors were significantly increased in the adjudin-pretreated NSC group compared with the nonpretreated NSC and PBS groups (Fig. 11a, b), which was also confirmed by western blot analysis (Fig. 11c-f).
Adjudin preconditioning promoted angiogenesis and enhanced neurobehavioral recovery after ischemia/reperfusion
Ischemic angiogenesis directly relates to the reestablishment of microcirculation within the I/R-damaged area and represents a vital process for poststroke functional recovery [49,50]. Because angiogenesis modulates the endogenous angiogenic response to generate new vessels and thereby increases the blood supply necessary for new neuronal survival and development, it is directly linked to neurogenesis [51,52]. Here we measured cells double-positive for the endothelial marker CD31 and 5-bromo-2′-deoxyuridine (BrdU) to evaluate angiogenesis 35 days after transplantation (Fig. 12a, b). The staining results showed that nonpreconditioned NSCs significantly increased new vessel generation compared with the PBS group (Fig. 12a top panel, b), while adjudin-pretreated NSCs had an even more pronounced effect on angiogenesis (Fig. 12a bottom left panel, b). To evaluate the effect of adjudin pretreatment on functional recovery, the rotarod test was performed at different time points (up to 5 weeks) after cell transplantation.
Rotarod latency declined sharply after tMCAO surgery compared with the nonoperated group (Fig. 12c), while it was significantly prolonged in the surgery groups at 7, 14, and 35 days after cell transplantation (Fig. 12c). Across the tMCAO groups, the functional recovery effects were in accordance with the angiogenesis results: NSC transplantation significantly increased rotarod latency compared with the PBS group, and adjudin-pretreated NSCs showed even better effects (Fig. 12c).
Discussion
In this study, we showed that, compared with nonpreconditioned NSCs, adjudin preconditioning not only enhanced the survival rate of NSCs under H2O2 oxidative stress in vitro, but also had a better effect on decreasing infarct volume, improving behavioral outcome, inhibiting neuroinflammation and oxidative stress, maintaining BBB integrity, and increasing expression of neurotrophic factors, resulting in stronger therapeutic effects against I/R-induced brain injury. This neuroprotective effect was mediated by inhibiting activation of the p38 and JNK MAPK signaling pathways. Together, our results suggest the potential of using adjudin in NSC transplantation and provide preclinical experimental evidence for combination therapy of adjudin and NSCs after stroke.
Because of the complexity of the ischemic cascade, which includes various mechanisms of excitotoxicity (glutamate release and receptor activation), calcium influx, ROS scavenging, NO production, inflammatory reactions, and apoptosis, numerous molecular targets have been tackled in order to achieve neuroprotection [53,54]. Since the majority of patients continue to exhibit neurological deficits even following successful thrombolysis and therapy, restorative therapies are urgently needed to promote brain remodeling and repair once stroke injury has occurred. Stem cell transplantation has emerged as a promising regenerative medicine for ischemic stroke, promoting tissue repair and functional recovery via potent immune modulatory actions, trophic support enforcement, and cell replacement mechanisms [13,55]. However, a number of issues and problems remain unresolved and need specific attention in order to develop clinical treatments successfully. These include an appropriate cell source in consideration of therapeutic value and ethical concerns, cell type-specific differentiation, and survival of transplanted cells in the harsh pathological microenvironment [16]. Massive death of donor cells in the infarcted area during the acute phase immensely lowers the efficacy of the procedure [17].

Fig. 9 Adjudin-pretreated NSCs inhibited phosphorylation of p38 and JNK after tMCAO. p-p38, p38, p-JNK, JNK, p-ERK, and ERK levels in the sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups 3 days after cell transplantation in the ipsilateral cortex (a-f) and striatum (g-l). Quantification of densitometric values of the protein bands normalized to total p38, JNK, and ERK1/2 (d-f, j-l); n = 6 in each group. Data are mean ± SEM. *P < 0.05, ***P < 0.001. NSC neural stem cell, PBS phosphate-buffered saline
To improve the effect of stem cell-based therapy, various strategies have been adopted to develop and optimize protocols that enhance donor stem cell survival after transplantation, with a special focus on the preconditioning approach [56].
Up to now, a number of preconditioning triggers have been tested in stem cell-based therapy, such as ischemia, hypoxia, H2O2, erythropoietin (EPO), insulin-like growth factor-1 (IGF-1), and pharmacological agents; these studies have shown that exposure of stem cells to sublethal hypoxia or other preconditioning insults increases their tolerance to multiple injurious insults and thus protects them against the harsh environment after transplantation [27,57-61].

Fig. 10 Adjudin-pretreated NSCs lessened Evans blue and IgG extravasation and inhibited ZO-1 and occludin degradation. Photographs represent the perfused brains after EB injection (a). Quantification of extravasated EB dye, analyzed by spectrophotometer at 610 nm (b); n = 14 for the PBS and untreated NSC groups, n = 19 for the adjudin-pretreated NSC group. Immunofluorescence staining for IgG (red) in the sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups 3 days after cell transplantation, with DAPI staining for contrast (c); scale bar = 100 μm. Quantification of IgG fluorescence intensity in each group (d); n = 8 in each group. Sections from the ischemic penumbra were stained for ZO-1 (green) and occludin (green), and costained with the endothelial marker CD31 (red) (e); nuclei were stained with DAPI; scale bar = 100 μm. Representative western blot analysis of ZO-1 and occludin protein levels in the ischemic penumbra from the sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups 3 days after cell transplantation (f). Quantification of densitometric values of the protein bands normalized to the respective β-tubulin and actin (g); n = 6 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. NSC neural stem cell, PBS phosphate-buffered saline

Fig. 11 Adjudin-pretreated NSCs upregulated expression of neurotrophic factors. Relative mRNA expression of BDNF, NGF, and GDNF normalized to Rplp0 in cortex (a) and striatum (b) from the sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups 3 days after cell transplantation. Western blot analysis of BDNF in cortex (c, d) and striatum (e, f) from the same groups; quantification of densitometric values of the protein bands normalized to β-tubulin; n = 6 in each group. Data are mean ± SEM. *P < 0.05, ***P < 0.001. BDNF brain-derived neurotrophic factor, NSC neural stem cell, PBS phosphate-buffered saline
Many studies have already illustrated that NSC therapy has great potential to restore neurological function after ischemic brain injury [6,14], and here we likewise demonstrated the neuroprotective effect of NSCs, which attenuated infarct volume and improved behavioral recovery after stroke and transplantation. In our study, we found that the MAPK signaling pathway, one of the underlying mechanisms of stem cell function, was dramatically inhibited 3 days after NSC transplantation. Our results showed that NSC transplantation inhibited the activation of p-ERK1/2, p-JNK1/2, and p-p38 MAPKs, which increased significantly after I/R injury in comparison with sham-operated animals. MAPK signaling pathways are not only implicated in inflammatory and apoptotic processes of cerebral I/R injury, but are also involved in the proliferation, survival, and cell fate determination (neurogenesis versus gliogenesis) of NSCs, which depend on extrinsic factors regulated by different MAPK-activated transcription factors or interactions with other signaling pathways [62,63]. MAPKs are activated after focal cerebral I/R and mainly function as mediators of cellular stress by phosphorylating intracellular enzymes, transcription factors, and cytosolic proteins involved in cell survival, inflammatory mediator production, and apoptosis [64,65]. Kyriakis and Avruch showed that JNK and p38 MAPKs contribute to cell injury, unlike ERK signaling, which is part of the survival route [64]. Cumulative experimental evidence shows that p38 and JNK MAPKs can be activated in neurons, microglia, and astrocytes after various types of ischemia [66-69], and their activation is associated with the production of proinflammatory cytokines, such as TNF-α and IL-1β, which tend to act as perpetrators of CNS injury [70,71].
A growing body of evidence shows that inhibition of p38 or JNK MAPK activation, using inhibitors or knockout mice, provides protection in a variety of brain injury models [72-75]. However, phosphorylation of ERK occurred at different time intervals after I/R injury, and whether ERK activation is associated with neuronal protection or damage in the ischemic brain remains to be determined unequivocally [76]. In our experiments, we found that adjudin preconditioning further decreased the levels of p-JNK1/2 and p-p38 MAPKs, but had no additional effect on the increase in p-ERK1/2 levels compared with the nonpreconditioned NSC group. These findings support the involvement of the JNK1/2 and p38 MAPK pathways in adjudin preconditioning neuroprotection. Notably, the failure of adjudin to attenuate the increased p-ERK1/2 levels was consistent with our observation that adjudin treatment did not change p-ERK1/2 levels in H2O2-induced NSC injury in vitro (Additional file 4: Figure S4).
Adjudin preconditioning could increase the expression of p-Akt both in vitro and in vivo. Akt belongs to a conserved family of signal transduction enzymes, which is the downstream target of phosphoinositide 3-kinase (PI3K) that not only plays an important part in regulating cellular activation and inflammatory responses, but also participates in cell growth, survival, metabolism, and apoptosis [77,78]. In the initial hours of cerebral ischemia, p-Akt protein level transiently rises in neurons, and this increment is supposed to be a neuroprotective response [79]. The phosphorylation of Akt could activate downstream proteins such as Bcl-2-associated death protein (BAD) and caspase 9, thereby inhibiting the Baxdependent apoptosis pathway and blocking cytochrome c-mediated caspase 9 activation [78,80]. In our study, we found that the level of p-Akt was elevated in the adjudin-preconditioned NSC group compared with that of the nonpreconditioned NSC group both in vivo and in vitro. Therefore, we demonstrated that the positive effect of adjudin preconditioning was mediated partially through a PI3K/Akt-dependent mechanism.
In ischemic brain injury, energy metabolism dysfunction and glutamate excitotoxicity induce massive cell death within hours to days, with additional injury resulting from increased free radicals and inflammatory responses [35]. Adjudin preconditioning of NSCs enhanced resistance to these insults by modulating the MAPK and Akt signaling pathways, inhibiting the activation of microglia, downregulating IL-6, IL-1β, TNF-α, and iNOS, and upregulating antioxidant genes such as SOD2, catalase, and GCLC. Microglial cells are brain macrophages that serve important functions in many CNS diseases. Our previous work showed that adjudin significantly attenuates microglial activation and decreases proinflammatory cytokine release through inhibition of NF-κB activity in BV2 microglia [36], and here we also demonstrated that adjudin pretreatment dramatically decreased H2O2-induced phosphorylation of p65 in NSCs (Additional file 5: Figure S5). Mitochondria play an important role in cytoprotection and preconditioning of cells, and generation of ROS in mitochondria is one of the main triggers that induce ischemic tolerance in the brain [81]. Madhavan et al. [82] demonstrated that NSCs resist oxidative stress better than neurons because of their higher steady-state expression of antioxidant enzymes and faster upregulation following oxidative stress stimulation. In this study, we showed that adjudin pretreatment significantly increased SOD2 and catalase activity and decreased iNOS levels both in the ischemic penumbra and in H2O2-induced NSC injury compared with nonpreconditioned NSCs. Thus, our results provide evidence for the superior antioxidative activity of preconditioned NSCs after focal cerebral I/R injury.
Besides neuroinflammation and oxidative stress, we also focused on the protective effects of adjudin-preconditioned NSCs on BBB permeability, since maintaining BBB integrity is critical for reducing secondary brain injury following cerebral ischemia. As the core of the BBB, tight junction proteins such as JAM-A, claudin-5, occludin, and ZO-1 are located in the tightly sealed monolayer of brain endothelial cells (BEC) and confer barrier function that precludes blood substances from permeating into the brain parenchyma [35]. Many brain injuries, such as ischemia and trauma, lead to disruption and reconstruction of tight junction proteins. In the present study, we demonstrated that, compared with nonpreconditioned NSCs, adjudin preconditioning further reduced the leakage of IgG and EB by maintaining the levels of the tight junction proteins ZO-1 and occludin, leading to better outcomes in tMCAO mice. This protective effect was due to attenuation of the neuroinflammatory response and oxidative stress, both of which are capable of disrupting the epithelial barrier by decreasing tight junction protein expression [83].
Better understanding of the molecules acting in neuroprotection might illuminate further treatment strategies for neurological disorders [84]. Transplanted NSCs exert beneficial effects not only via structural replacement but also via neurotrophic actions [85,86]. An interesting finding of this study was the induction of neurotrophic factors by adjudin preconditioning. Numerous studies have demonstrated that grafted stem cells adapt to the ischemic microenvironment and facilitate homeostasis via the secretion of numerous tissue trophic factors that have beneficial effects on endogenous brain cells, as well as modulatory actions on both innate and adaptive immune responses [13]. Our work illustrated that, compared with the nonpreconditioning group, adjudin preconditioning significantly increased the expression of BDNF in the ipsilateral brain 3 days after transplantation. Concomitantly, the heightened expression of BDNF, GDNF, and NGF in adjudin-pretreated NSCs in vitro was consistent with our in vivo observations, further demonstrating the neuroprotection conferred by adjudin-preconditioned NSCs. BDNF counteracts cerebral ischemic injury by upregulating antioxidant enzymes and mainly by interfering with apoptotic pathways [87]. Greenberg et al. [88] found that the Akt pathway is an important downstream signaling pathway of BDNF, through which BDNF protects tissue from injury and fosters neuronal plasticity. Meanwhile, Lu et al. illustrated that the role of BDNF in hippocampal neurogenesis is mediated by ERK1/2 signaling pathways [89]. Moreover, Almeida et al. revealed that exposure of neurons to BDNF stimulates CREB phosphorylation and activation via both the MAPK and PI3K/Akt pathways. CREB is capable of regulating BDNF gene transcription directly, suggesting that a positive-feedback loop may operate in some cell populations that are resistant to brain injury [90].
These findings, together with our results, support the view that the neuroprotective effects of NSCs and adjudin-preconditioned NSCs are not mediated by any single route; rather, multiple pathways crosstalk with one another.
Although our work showed a better neuroprotective function of adjudin-preconditioned NSCs against I/R-induced brain injury, and adjudin may become a promising drug for clinical use in combination with stem cell-based therapy, further research is required before applying it clinically. The advantages of stem cell-based therapy are that grafted cells can secrete a plethora of soluble molecules to modulate the activation of host microglia/macrophages, thus modifying the release of inflammatory mediators, inhibiting oxidative stress, and thereby stabilizing the BBB; they can also directly increase cell proliferation within the SVZ, potentiate neuroblast migration, augment peri-ischemic angiogenesis, and positively affect the differentiation of endogenous neuroblasts and plasticity within the ischemic tissue. In addition, they can directly differentiate into postmitotic neurons, astrocytes, or oligodendrocytes to establish new neural circuits, and finally attenuate ischemic brain injury and improve neurobehavioral recovery [13,15]. To examine whether adjudin preconditioning can achieve a better therapeutic effect and to promote the translation of adjudin to clinical use, more long-term experiments should be carried out. This study included a 35-day experiment that has already shown encouraging results, but limitations remain. A previous study showed that NSCs could survive and differentiate into functional neurons, attenuate infarction, and improve neurobehavioral recovery after stroke [91]. To further confirm the role of adjudin and to study the mechanism by which adjudin-pretreated NSCs protect the brain from ischemic injury, long-term experiments observing the number, localization, and differentiation status of transplanted cells in the ischemic brain are needed in future studies.
Furthermore, adjudin has been shown to have no apparent side effects in treated animals [33], but long-term safety remains a concern for clinical use when combined with cell sources. Although Lindvall and Kokaia [92] reported that no tumors were detected in five patients with Batten disease 2 years after transplantation of human fetal NSCs, the harsh microenvironment after I/R brain injury might influence the tumorigenesis and differentiation profiles of grafted NSCs [93]. Observations in larger cohorts will be required before more definite conclusions regarding the safety of stem cell treatment can be made.
Conclusion
In summary, our study demonstrated that adjudin preconditioning promoted NSC survival under H2O2 stimulation in vitro, reprogrammed NSCs to tolerate neuroinflammation and oxidative stress, and increased their expression of neurotrophic factors, thereby augmenting the therapeutic efficacy of NSCs in transient focal ischemia in vivo. The protective effect of adjudin was achieved through activation of the Akt pathway and inhibition of the p-p38 and p-JNK MAPK pathways. The beneficial effects of adjudin preconditioning may represent a safe approach for future clinical applications.
|
v3-fos-license
|
2018-06-16T01:19:55.103Z
|
1971-12-01T00:00:00.000
|
49235639
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ejournals.bc.edu/index.php/ital/article/download/5598/4955",
"pdf_hash": "db59d9980b82c3d235d80876b192a7df77293908",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:836",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "db59d9980b82c3d235d80876b192a7df77293908",
"year": 1971
}
|
pes2o/s2orc
|
A COMPUTER SYSTEM FOR EFFECTIVE MANAGEMENT OF A MEDICAL LIBRARY NETWORK
TRIPS (TALON Reporting and Information Processing System) is an interactive software system for generating reports to NLM on regional medical library network activity and constitutes a vital part of a network management information system (NEMIS) for the South Central Regional Medical Library Program. Implemented on a PDP-10/SRU 1108 interfaced system, TRIPS accepts paper tape input describing network transactions and generates output statistics on disposition of requests, elapsed time for completing filled requests, time to clear unfilled requests, arrival time distribution of requests by day of month, and various other measures of activity and/or performance. Emphasized in the TRIPS design are flexibility, extensibility, and system integrity. Processing costs, neglecting preparation of input which may be accomplished in several ways, are estimated at $.05 per transaction, a transaction being the transmittal of a message from one library to another.
INTRODUCTION
The TALON (Texas, Arkansas, Louisiana, Oklahoma, and New Mexico) Regional Medical Library Program is one of twelve regional programs established by the Medical Library Assistance Act of 1965. The regional programs form an intermediate link in a national biomedical information network with the National Library of Medicine (NLM) at the apex. Unlike most of the regional programs that formed around a single library, TALON evolved as a consortium of eleven large medical resource libraries with administrative headquarters in Dallas. A major focus of the TALON program is the maintenance of a document delivery service, created in March 1970, to enable rapid access to published medical information. TWX units located in ten of the resource libraries and at TALON headquarters in Dallas comprise the major communication channel.
In July 1970 a joint program was initiated to develop a statistical reporting system for the TALON document delivery network. Design and development of the system was done by the Computer Science/Operations Research Center at Southern Methodist University, while training and operational procedures were developed by TALON personnel. Both parties in the effort view the statistical reporting system as a vital first step in providing TALON administrators with a comprehensive network management information system (NEMIS). An overview of this statistical reporting system, designated as TRIPS (TALON Reporting and Information Processing System), and its relation to NEMIS is discussed in the following paragraphs. The objectives and design characteristics of NEMIS are stated in (1).
DESIGN REQUIREMENTS
There were two considerations in setting requirements for a network management information system (NEMIS) for TALON: 1) In what environment would TALON function? 2) What should be the objectives of a network management information system, and what part does a statistical reporting system play in its development? The TALON staff and the design team spent an intensive period in joint discussion of these two questions.
TALON Environment
The TALON document delivery network operates in an expansive geographical area (Figure 1). The decentralized structure of the network enables information transfer between any two resource libraries. In addition, TALON headquarters serves as a switching center by accepting loan requests, locating documents, and relaying requests to holding libraries.
A requirement placed on TALON by NLM is the submission of monthly, quarterly, and annual reports giving statistical data on network activity. These statistics provide details on: 1) requests received by channel used (mail, telephone, TWX, other), 2) disposition of requests (rejected, accepted and filled, accepted and unfilled), 3) response time for filled requests, 4) response time for unfilled requests, 5) most frequent user libraries, 6) requests received from each of the other regions, and 7) non-MEDLARS reference inquiries. Monthly reports require cumulative statistics on year-to-date performance, and each of the eleven resource libraries and TALON headquarters is required to submit a report on its activity.
Needs and Objectives
While the immediate need of the TALON network was to develop a system to eliminate manual preparation of NLM reports, an initial decision was made to develop software also capable of assisting TALON management in policy and decision making. Eventual need for a network management information system (NEMIS) being recognized, the TALON reporting and information processing system (TRIPS) was designed as the first step in the creation of NEMIS.
Provision of information in a form suitable for analytical studies of policy and decision making (e.g., the message distribution problem described by Nance (2)) placed some stringent requirements on TRIPS. For instance, the identification of primitive data elements could not be made from report considerations only; an overall decision had to be made that no sub-item of information would ever be required for a data element. In addition, the system demanded flexibility and extensibility, since it was to operate in a highly dynamic environment. These characteristics are quite apparent in the design of TRIPS.
TRIPS DESIGN
TRIPS is viewed as a system consisting of hardware and software components. The description of this system considers: 1) the input, 2) the software subsystems (set of programs), 3) hardware components, and 4) the output. Emphasis is placed on providing an overview, and no effort is made to give a detailed description.
The environment in which TRIPS is to operate is defined in a single file (FOR25.DAT). This file assigns network parameters, e.g., number of reporting libraries, library codes, and library titles. The file is accessed by subprograms written in FORTRAN IV and DYSTAL (3), the latter being a set of FORTRAN IV subprograms, termed DYSTAL functions, that perform primitive list processing and dynamic storage allocation operations.
Because it requires only FORTRAN IV, TRIPS can be implemented easily on most computers.
Input
A transaction log, maintained by each regional library and TALON headquarters, constitutes the basic input to TRIPS. Copies of log sheets are used to create paper tape descriptions of the transactions. If and when compatibility is achieved between standard TWX units and telephone entry to computer systems, the input could be entered directly by each regional library. (This is technically possible at present.) Currently, TALON headquarters is converting the transaction descriptions to machine readable form. Initial data entry under normal circumstances is pictured in Figure 2, which shows the sequence of operations and file accesses in two phases: 1) data entry and 2) report generation. Data entry in turn comprises 1) collecting statistics, 2) diagnosis and verification of input data, and 3) backup of original verified input data. TRIPS is designed to be extremely sensitive to input data. All data is subjected to an error analysis, and a specific file (FOR22.DAT) is used to collect errors detected or diagnosed in the error analysis routine. Only verified data records are transmitted to the statistical accumulation file (FOR20.DAT).
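The data-entry phase described above (diagnose each record, collect errors, and pass only verified records on to statistics accumulation) can be sketched as follows. The record fields, library codes, and validation rules here are illustrative assumptions, not the actual TRIPS record format.

```python
# Illustrative sketch of the TRIPS data-entry phase: each transaction
# record is diagnosed, and only verified records reach the statistics
# file; rejects are collected in an error file. Field names, codes, and
# rules are hypothetical, not the actual TRIPS formats.

VALID_LIBRARIES = {"DAL", "HOU", "OKC"}           # assumed library codes
VALID_CHANNELS = {"mail", "telephone", "TWX", "other"}

def diagnose(record):
    """Return a list of error messages for one transaction record."""
    errors = []
    if record.get("from") not in VALID_LIBRARIES:
        errors.append("unknown sending library")
    if record.get("to") not in VALID_LIBRARIES:
        errors.append("unknown receiving library")
    if record.get("channel") not in VALID_CHANNELS:
        errors.append("unknown request channel")
    if not 1 <= record.get("day", 0) <= 31:
        errors.append("day of month out of range")
    return errors

def data_entry(records):
    """Split records into verified (statistics file) and rejected (error file)."""
    verified, rejected = [], []
    for rec in records:
        errs = diagnose(rec)
        if errs:
            rejected.append((rec, errs))
        else:
            verified.append(rec)
    return verified, rejected

log = [
    {"from": "DAL", "to": "HOU", "channel": "TWX", "day": 12},
    {"from": "DAL", "to": "XXX", "channel": "TWX", "day": 40},
]
verified, rejected = data_entry(log)
print(len(verified), len(rejected))  # 1 1
```

In the real system the rejected list would be written to the error file and the verified list appended to the statistical accumulation file.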
Software Subsystems
TRIPS comprises seven subsystems or modules. Within each module are several FORTRAN IV subprograms, DYSTAL functions, and/or PDP-10 systems programs discussed under hardware components in the following section. NEWY: Run at the beginning of each year, NEWY builds an in-core data structure and transfers it to disk for each resource library in the network. It further creates the original data backup disk file (FOR23.DAT). After disk formatting, RECORD (the accessing and storage module) may be activated to begin accumulating statistics for the new year. A major concern in any management information system is system integrity. In addition to the diagnosis of input data, TRIPS concatenates sequential copies of disk file FOR23.DAT to provide a magnetic tape backup containing all valid data records for the current year. A failsafe tape, containing all TRIPS programs, is also maintained.
Hardware Components
Conversion of transaction information to machine readable form is currently done off line. Using a standard TWX with ASCII code, paper tapes are created and spliced together. Fed through a paper tape reader to a PDP-10 (Digital Equipment Company), the input data is submitted to TRIPS. Control of TRIPS is interactive, with the user monitoring program execution from a teletype. All file operations are accomplished using the PDP-10 via the teletype, and the output reports are created on a high-speed line printer. With SMU's PDP-10 and SRU 1108 interface, report generation can be done on line printers at remote terminals to the SRU 1108 as well.
Output
TRIPS output consists of a report for each library in the network and a composite for the entire network. The report may be limited to reimbursable statistics or include all statistics. Information includes: 1) Errors encountered in the input phase, 2) Number of requests received by channel, 3) Disposition of requests (i.e., rejected, accepted/filled, accepted/unfilled, etc.), 4) Elapsed time for completing filled requests or clearing unfilled requests, 5) Geographic origin of requests, 6) Titles for which no holdings were located within the region, 7) Types of requesting institutions, 8) Arrival time distribution of requests by day of month, 9) Invoice for reimbursement by TALON, 10) Node/network dependency coefficient as described by (4).
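Several of the listed statistics (requests by channel, disposition of requests, elapsed time for filled requests, and the arrival distribution by day of month) reduce to simple aggregations over the verified transaction records. A hypothetical sketch, with invented field names:

```python
from collections import Counter

# Hypothetical verified transaction records: request channel,
# disposition, elapsed days to fill/clear, and arrival day of month.
# These fields are illustrative, not the actual TRIPS data elements.
records = [
    {"channel": "TWX", "disposition": "filled", "elapsed_days": 2, "day": 3},
    {"channel": "mail", "disposition": "filled", "elapsed_days": 5, "day": 3},
    {"channel": "TWX", "disposition": "unfilled", "elapsed_days": 7, "day": 17},
    {"channel": "telephone", "disposition": "rejected", "elapsed_days": 0, "day": 28},
]

by_channel = Counter(r["channel"] for r in records)          # item 2
by_disposition = Counter(r["disposition"] for r in records)  # item 3
arrival_by_day = Counter(r["day"] for r in records)          # item 8

# Item 4: mean elapsed time for filled requests.
filled = [r["elapsed_days"] for r in records if r["disposition"] == "filled"]
mean_fill_time = sum(filled) / len(filled)

print(by_channel["TWX"], by_disposition["filled"], mean_fill_time)  # 2 2 3.5
```

The same scan, run per library and once over the whole network, yields the per-library reports and the network composite.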
SUMMARY
TRIPS is now entering its operational phase. Training of personnel at the resource libraries is concluded, and data on transactions are being entered into the system. Input errors have decreased significantly (from fifteen or twenty percent to approximately two percent). TALON personnel are enthusiastic, and needless to say the regional library staffs are happy to see a bothersome, time-consuming manual task eliminated.
In summary, the following characteristics of TRIPS deserve repeating: 1) With its modular construction, it is flexible and extensible.
2) Implemented in DYSTAL and FORTRAN IV, it should allow installation on most computers without major modifications. 3) Designed to operate in an interactive environment, it can be modified easily to function in a batch processing environment. 4) TRIPS is extremely sensitive to system integrity, providing diagnosis of input data, reporting of errors, magnetic tape backup of data files, and a system failsafe tape. 5) Definition of primitive data elements and the structural design of TRIPS enable it to serve as the nucleus of a network management information system (NEMIS) as well as to generate reports required by NLM. 6) Currently accepting paper tape as the input medium, TRIPS could be modified easily to accept punched card input and, with more extensive changes, could derive the input information during the message transfer among libraries. Finally, the processing cost of operating TRIPS, neglecting the conversion to paper tape, is estimated to be $.05 per transaction (a message transfer from one library to another).
Extensive and thorough documentation of TRIPS has been provided. Availability of this documentation is under review by the funding agency.
Fig. 1. Location of the Eleven Resource Libraries and TALON Headquarters.
|
v3-fos-license
|
2024-06-24T15:08:15.618Z
|
2024-06-22T00:00:00.000
|
270693646
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "5e2b25eb8f7c1564461997f1c7d1526abace8a94",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:837",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"sha1": "d60af3c430cbfdbe82ec2cf02e81a71afc22842c",
"year": 2024
}
|
pes2o/s2orc
|
5G wavelength-division-multiplexing-based bidirectional optical wireless communication system with signal remodulation employing cascaded reflective semiconductor optical amplifiers
Compared with previous generations, fifth-generation communications can provide faster download and upload speeds and support a greater number of connected devices. Integrating fifth-generation signals with optical wireless communication systems provides promising ways to afford higher transmission rates and faster wireless connectivity. Here we report a fifth-generation wavelength-division-multiplexing-based bidirectional optical wireless communication system with signal remodulation employing cascaded reflective semiconductor optical amplifiers to effectively remove the downstream data for uplink transmission. It shows a fifth-generation wavelength-division-multiplexing-based bidirectional optical wireless communication system using four wavelengths for communication. The uplink performance is substantially enhanced by using two reflective semiconductor optical amplifiers to remove the downstream data. The system achieves an aggregate transmission rate of 36.4 Gbit/s for both downlink and uplink transmissions over a 100-m optical wireless link. This demonstrated fifth-generation wavelength-division-multiplexing-based bidirectional optical wireless communication system employing cascaded reflective semiconductor optical amplifiers holds great potential for enhancing fifth-generation advanced communication capabilities.
Fifth-generation (5G) communications deliver a substantial boost in transmission rates due to the combination of expanded bandwidth and sophisticated communication techniques [1-4]. They enable new applications such as mixed reality, cloud gaming, and real-time IoT applications. With the rapid development of optical wireless communication (OWC) systems, these systems have gained attention for their potential to provide high-speed and high-capacity optical communications, especially in scenarios where radio frequency communications are challenging 5,6. The integration of 5G signals with OWC systems (as illustrated in Fig. 1) therefore offers promising avenues for providing high transmission rates and meeting the growing demand for faster and more reliable wireless connectivity. Former research presented the feasibility of building an actively controllable beam-steering OWC system employing an integrated optical phased array 7. However, it did not directly connect 5G signals through optical wireless links. 5G signals through optical wireless links are important for the integration of 5G signals with OWC systems. One of the characteristics of a 5G OWC system is that it is directly related to 5G communications. In actual scenarios, a 5G OWC system should be developed instead of an OWC system that is not connected to 5G communications. Besides, such an actively controllable beam-steering OWC system is a unidirectional OWC system, not a bidirectional one. A bidirectional OWC system allows simultaneous transmission in both downlink and uplink directions. It enables free-space reuse and better spectrum utilization, leading to higher transmission rates and capacities. Furthermore, constructing a bidirectional OWC system with phase modulation and a remotely injection-locked distributed feedback laser diode (LD) was shown to be practicable 8. Nevertheless, it presents challenges in terms of converting phase modulation to intensity modulation using the remotely injection-locked distributed feedback LD. In addition, four-level pulse amplitude modulation and non-return-to-zero signals are used and transmitted in that bidirectional OWC system. However, the third-generation partnership project (3GPP) specifications do not define these signal types. In contrast, the 5G signal defined by 3GPP specifications uses an orthogonal frequency-division multiplexing (OFDM) signal for both downlink and uplink transmissions 9. Obviously, there is room for improvement in 5G signal transmission supported by 3GPP specifications. Moreover, a 40-Gbit/s downlink signal transport using an OFDM signal with single-sideband modulation and a 10-Gbit/s uplink signal transport employing a reflective semiconductor optical amplifier (RSOA) for remodulation were realized 10. However, that work was not aligned with 5G signal transmission. Bidirectional lightwave transport systems should be developed to align with 5G communications. In this demonstration, a 5G wavelength-division-multiplexing (WDM)-based bidirectional OWC system with signal remodulation employing cascaded RSOAs to effectively remove the downstream data for uplink transmission is practically implemented. It uses four optical wavelengths and two RSOAs as a demonstration. For downlink transmission, each of the four optical wavelengths is used to deliver a 9.1-Gbit/s/28-GHz millimeter-wave (MMW) signal using 16-quadrature amplitude modulation (QAM)-OFDM modulation. The downstream modulated data on the four optical wavelengths is effectively erased by two RSOAs, and these wavelengths are then reused as upstream optical carriers. The uplink performance is substantially enhanced by utilizing the two RSOAs to remove the downstream data. The four upstream optical wavelengths are modulated by an MZM with a 9.1-Gbit/s/24-GHz MMW signal using 16-QAM-OFDM modulation. This 5G WDM-based bidirectional OWC system achieves an aggregate transmission rate of 36.4 Gbit/s (9.1 Gbit/s × 4) for both downstream and upstream data. Through a 100-m optical wireless link, good bit error rates (BERs) (below the 3.8 × 10⁻³ forward error correction (FEC) limit) and error vector magnitudes (EVMs) (below the 12.5% 3GPP limit) 11,12, clear constellation diagrams, and flat electrical spectra are attained for downlink/uplink transmissions.
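As background on the modulation format, 16-QAM-OFDM can be sketched as Gray-mapped 16-QAM symbols placed on parallel subcarriers and combined by an inverse DFT. This is a generic textbook sketch, not the authors' actual transmitter or receiver chain:

```python
import cmath

# Generic sketch of 16-QAM-OFDM baseband modulation (not the authors'
# actual transmitter): 4-bit groups -> Gray-mapped 16-QAM symbols ->
# inverse DFT -> one OFDM time-domain symbol.

GRAY2 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}  # Gray-coded PAM-4 levels

def qam16(bits):
    """Map a bit list (length a multiple of 4) to 16-QAM symbols."""
    syms = []
    for i in range(0, len(bits), 4):
        i_level = GRAY2[tuple(bits[i:i + 2])]       # in-phase component
        q_level = GRAY2[tuple(bits[i + 2:i + 4])]   # quadrature component
        syms.append(complex(i_level, q_level))
    return syms

def idft(symbols):
    """Inverse DFT: one OFDM symbol's time-domain samples."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def dft(samples):
    """Forward DFT: the receiver's per-subcarrier demapping step."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(samples))
            for k in range(n)]

bits = [0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1]
tx = qam16(bits)           # 4 subcarriers, one 16-QAM symbol each
time_samples = idft(tx)    # OFDM time-domain symbol
rx = dft(time_samples)     # receiver DFT recovers the symbols
```

In a real transmitter the IDFT spans hundreds of subcarriers per symbol, which is how each wavelength carries the 9.1-Gbit/s stream.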
5G WDM-based bidirectional OWC systems would normally require four laser sources for the upstream. However, this increases the complexity of 5G WDM-based bidirectional OWC systems. For an actual implementation, a simple upstream light source is vital. Employing two RSOAs to remove the downstream data offers an attractive solution because it does not require the use of four lasers with chosen wavelengths for uplink transmission. By utilizing two RSOAs instead of four lasers with selected wavelengths, the complexity associated with wavelength selection can be avoided. In addition, if four inventory lasers with different but unselected wavelengths are used for uplink transmission, the WDM demultiplexer (DEMUX) at the upstream receiving site cannot accurately demultiplex the upstream wavelengths. Inaccurate demultiplexing due to the use of lasers with non-selected wavelengths will degrade uplink transmission performance. Signal remodulation can be achieved with different devices such as a Fabry-Perot (FP) LD 13, an RSOA 14,15, or an electro-absorption modulator 16. Due to the limited bandwidth of the FP LD and RSOA, it is challenging for them to provide high-speed data streams for uplink transmission. As for the electro-absorption modulator, the uplink transmission performance based on the FP LD and RSOA is better than that based on the electro-absorption modulator 16. In our proposed bidirectional OWC system, the four upstream wavelengths are modulated by a Mach-Zehnder modulator (MZM). The bandwidth of the MZM is much higher than that of the FP LD and RSOA, so it can support higher data rates for uplink transmission. Besides, previous works on the signal remodulation of optical MMW signals have been reported 17,18. However, they require a complex vertical-cavity surface-emitting laser-based phase modulation to intensity modulation converter and a sophisticated centralized light source for uplink transmission. Furthermore, since only one optical carrier is reused in the uplink, the uplink transmission rate will be much lower than that associated with operation using multiple optical carriers. Downstream data erasure can also be achieved by using a single RSOA with low saturation input power. When operating an RSOA with low saturation input power, increasing the RSOA input power can further erase the downstream data. However, owing to the limited slope of the output power-input power curve, this approach may encounter challenges related to incomplete erasure, resulting in residual downstream data.
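The erasure mechanism itself (a deeply saturated gain stage compresses the intensity modulation, and a second cascaded stage compresses the residue further) can be illustrated with a toy saturable-gain model. The gain law and power levels below are deliberately simplified assumptions, not measured RSOA characteristics:

```python
# Toy model of downstream-data erasure by cascaded saturated amplifiers.
# A deeply saturated stage compresses intensity modulation; cascading a
# second stage compresses the residue further. Gain law and numbers are
# illustrative assumptions, not real RSOA parameters.

def saturated_stage(p_in, p_sat=0.05, p_max=1.0):
    """Simple saturable gain: output flattens once p_in >> p_sat."""
    return p_max * p_in / (p_in + p_sat)

def modulation_depth(p_high, p_low):
    """Residual intensity-modulation depth of an on-off data stream."""
    return (p_high - p_low) / (p_high + p_low)

# Downstream on-off intensity levels reaching the remodulation site.
p1, p0 = 1.0, 0.2
m0 = modulation_depth(p1, p0)                       # input depth

p1a, p0a = saturated_stage(p1), saturated_stage(p0)
m1 = modulation_depth(p1a, p0a)                     # after one stage

p1b, p0b = saturated_stage(p1a), saturated_stage(p0a)
m2 = modulation_depth(p1b, p0b)                     # after two stages

print(round(m0, 3), round(m1, 3), round(m2, 4))
```

Under this toy model the residual modulation depth drops sharply at each stage, mirroring the observation that one RSOA leaves residual downstream data while two cascaded RSOAs virtually eliminate it.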
Using two RSOAs to completely erase the downstream data and then using an MZM to modulate the four optical carriers is worthwhile, as this avoids the requirement for multiple lasers with selected wavelengths and the bandwidth limitations of the FP LD, RSOA, and electro-absorption modulator. Furthermore, it does not require a complex vertical-cavity surface-emitting laser-based phase modulation to intensity modulation converter or a sophisticated centralized light source. Additionally, it avoids the constraint of selecting an RSOA with low saturation input power. By using RSOAs to erase the downstream data, the same optical carriers can be reused for uplink transmission, thereby optimizing spectral efficiency. Moreover, RSOAs offer a compact and straightforward solution for implementing bidirectional OWC systems. This eliminates the need for separate optical devices or complex signal processing techniques, simplifying the system architecture and reducing implementation complexity. Additionally, the use of RSOAs provides greater flexibility in system deployment, as it eliminates the need for dedicated wavelength management. Using RSOAs in bidirectional OWC systems offers substantial advantages in spectral efficiency, simplicity, and flexibility in deployment, meeting the demands of efficient 5G WDM-based bidirectional OWC systems. The deployment of a 5G WDM-based bidirectional OWC system using cascaded RSOAs for signal remodulation is an important step in realizing 5G communications.
Results and Discussion
Downlink/uplink BERs/EVMs and associated constellation diagrams
The downlink/uplink BERs under different received MMW powers over a 100-m optical wireless link are exhibited in Fig. 2a. To demonstrate the remodulation, two wavelengths, λ1 (1549.3 nm) and λ2 (1550.9 nm), are chosen for downlink and uplink BER performance evaluation. It can be seen that the BER performance is nearly the same for λ1 and λ2 in the downlink and uplink transmissions. Results indicate that the choice of wavelength has a minimal impact on the downlink/uplink BER performance. Moreover, due to the utilization of a 30-GHz photodiode (PD) in the experiment, 28-GHz signals require a higher received MMW power than 24-GHz signals for a similar BER. For downlink OFDM signal transmission, a 3.8 × 10⁻³ (FEC limit) BER is attained at received MMW powers of −26.9 (λ1, 1549.3 nm) and −27 (λ2, 1550.9 nm) dBm, as measured by a spectrum analyzer. For uplink OFDM signal transmission (two RSOAs), a 3.8 × 10⁻³ BER is acquired at received MMW powers of −27.2 (λ1) and −27.3 (λ2) dBm. To further correlate the number of RSOAs with uplink BER performance, we removed one RSOA and evaluated the uplink BERs over the 100-m optical wireless link. For uplink OFDM signal transmission (one RSOA), a 3.8 × 10⁻³ BER is acquired at received MMW powers of −23.9 (λ1) and −24.1 (λ2) dBm. At a 3.8 × 10⁻³ BER, power penalty improvements of 3.3 dB (λ1) and 3.2 dB (λ2) are observed when using two RSOAs. The use of two RSOAs contributes to the further erasure of downstream modulated data; there is no residual downstream modulated data, and the uplink BER is substantially improved. Furthermore, Fig. 2b shows the EVMs at wavelengths λ1 and λ2 (downlink/uplink) and at different received MMW powers. Through a 100-m optical wireless link, the EVMs of downlink/uplink 16-QAM-OFDM signals remain below the 12.5% 3GPP limit when the received MMW powers are higher than −28.9 (λ1), −29.1 (λ2), −29.3 (λ1, two RSOAs, uplink), and −29.4 (λ2, two RSOAs, uplink) dBm, respectively. The downlink 9.1-Gbit/s/28-GHz 16-QAM-OFDM signal has a slightly lower receiver sensitivity than the uplink 9.1-Gbit/s/24-GHz 16-QAM-OFDM signal. Moreover, since EVM is related to the carrier frequency and peak-to-average power ratio, the 9.1-Gbit/s/24-GHz 16-QAM-OFDM signal has a lower peak-to-average power ratio, contributing to a lower EVM at the same received power 19,20. To verify the relationship between the number of RSOAs and uplink EVM performance, we changed from two RSOAs to one RSOA to evaluate the EVMs. For the uplink signal using 16-QAM-OFDM modulation (one RSOA), a 12.5% EVM is obtained at received MMW powers of −25.5 (λ1) and −25.7 (λ2) dBm. At 12.5% EVM, power penalty degradations of 3.8 dB (λ1) and 3.7 dB (λ2) exist when using one RSOA. When using one RSOA, the downstream modulated data is incompletely erased, leading to interference that degrades the uplink EVM performance. As for the constellation diagrams, Figs. 2c and 2d show the constellation diagrams of the 9.1-Gbit/s/28-GHz and 9.1-Gbit/s/24-GHz 16-QAM-OFDM signals at wavelength λ1 (downlink/uplink), over the 100-m optical wireless link and at −26.9-dBm received MMW power. Clearly, each downlink/uplink 16-QAM-OFDM signal has a distinct constellation diagram, with BERs of 3.8 × 10⁻³ (Fig. 2c) and 2.6 × 10⁻³ (Fig. 2d). Low BERs, low EVMs, and clear constellation diagrams support the feasibility and utility of using 5G MMW signals over a bidirectional OWC system with two RSOAs.
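The EVM figure of merit used throughout is the rms error vector normalized to the rms of the ideal constellation. A minimal sketch of computing it for synthetic noisy 16-QAM symbols and checking it against the 12.5% 3GPP limit (the noise level is an arbitrary assumption, not the measured link):

```python
import math
import random

# Minimal EVM sketch: rms error vector / rms of the ideal constellation,
# checked against the 12.5% 3GPP limit. Symbols and noise are synthetic,
# not the measured 16-QAM-OFDM data from the experiment.

IDEAL_16QAM = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]

def evm_percent(received, reference):
    """EVM in percent: rms error normalized to rms reference power."""
    err = sum(abs(r - s) ** 2 for r, s in zip(received, reference))
    ref = sum(abs(s) ** 2 for s in reference)
    return 100.0 * math.sqrt(err / ref)

rng = random.Random(0)
reference = [rng.choice(IDEAL_16QAM) for _ in range(2000)]
# Add complex Gaussian noise (assumed sigma = 0.25 per quadrature).
received = [s + complex(rng.gauss(0, 0.25), rng.gauss(0, 0.25))
            for s in reference]

evm = evm_percent(received, reference)
print(round(evm, 1), evm < 12.5)
```

With this noise level the EVM lands a little above 11%, i.e. just inside the 12.5% limit, which is the kind of margin the reported operating points sit at.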
Comparisons for constellation diagrams with scenarios involving no RSOA, one RSOA, and two RSOAs
To clarify the improvement attained by employing two RSOAs, comparisons of constellation diagrams in scenarios involving no RSOA, one RSOA, and two RSOAs are presented. Figures 3a-c show the corresponding constellation diagrams of the 9.1-Gbit/s/24-GHz 16-QAM-OFDM signal at wavelength λ1 (uplink) through the 100-m optical wireless link and at −27.2-dBm received MMW power, in the scenarios with no RSOA, one RSOA, and two RSOAs.
In the scenario with no RSOA, the constellation diagram shows a blurred pattern with a 4.7 × 10⁻¹ BER (Fig. 3a). In the scenario with one RSOA, the constellation diagram presents a somewhat blurred pattern with a 5.4 × 10⁻³ BER (Fig. 3b). In the scenario with two RSOAs, however, a clear and distinct constellation diagram with a 2.6 × 10⁻³ BER (Fig. 3c) is attained. In the scenario with no RSOA, the downstream modulated data is not erased, which can lead to the simultaneous modulation of downstream and upstream data on the same optical carrier. This situation brings strong interference that degrades uplink performance and results in a blurred constellation diagram.
In the scenario with one RSOA, the downstream modulated data is incompletely suppressed, which can leave partial downstream data and all upstream data modulating the same optical carrier. This situation can cause interference, leading to reduced uplink performance and a somewhat blurred constellation diagram. As for the scenario with two RSOAs, the downstream modulated data is virtually eliminated, ensuring that only upstream modulated data exists on the optical carrier. This situation causes no interference from the downstream modulated data, therefore improving the uplink performance and making the constellation diagram clearly visible. The clear and distinct constellation diagrams show that the proposed bidirectional OWC system with signal remodulation employing cascaded RSOAs is capable of transmitting 5G signals at MMW frequencies.
The electrical spectra of 9.1-Gbit/s/28-GHz and 9.1-Gbit/s/24-GHz 16-QAM-OFDM signals, in the scenarios of using one RSOA and two RSOAs
Figure 4 shows the electrical spectra of the 9.1-Gbit/s/28-GHz (downlink) and 9.1-Gbit/s/24-GHz (uplink, one RSOA and two RSOAs) 16-QAM-OFDM signals at the λ1 wavelength, through the 100-m optical wireless link and at −26.9 and −27.2-dBm received MMW powers. The electrical spectrum of the downlink OFDM signal (Fig. 4a) has an acceptable amplitude fluctuation within ±3.5 dB. Since higher frequency signals experience higher propagation losses, the downlink OFDM signal has slightly higher powers at 23-31 GHz than at 31-33 GHz 21,22. In the scenario where only one RSOA is employed, the electrical spectrum of the uplink OFDM signal (Fig. 4b) exhibits a large amplitude fluctuation within ±6.8 dB. Such a large amplitude fluctuation is attributed to the incomplete suppression of the downstream modulated data and consequently results in worse signal quality. In the scenario of using two RSOAs, the electrical spectrum of the uplink OFDM signal (Fig. 4c) shows small amplitude fluctuations within ±3.1 dB. This shows that the downstream modulated data is virtually suppressed. Using two RSOAs mitigates interference from the downstream modulated data, resulting in better uplink performance and smaller amplitude fluctuations.
The subcarrier EVMs of the 9.1-Gbit/s/28-GHz (downlink) and 9.1-Gbit/s/24-GHz (uplink) 16-QAM-OFDM signals

Figure 5 exhibits the subcarrier EVMs of the 9.1-Gbit/s/28-GHz (downlink) and 9.1-Gbit/s/24-GHz (uplink) 16-QAM-OFDM signals at different subcarrier indices, in the scenarios of using one RSOA and two RSOAs. As before, λ1 (1549.3 nm) and λ2 (1550.9 nm) are picked for evaluating the downlink and uplink subcarrier EVMs. After a 100-m optical wireless link, the EVMs of the downlink OFDM signals remain below the 3GPP limit of 12.5% for subcarrier indices smaller than 104 (λ1) and 105 (λ2); as the subcarrier index increases, the EVM also increases. The average EVMs for λ1 and λ2 are around 9.4% and 9.1%, respectively, both below the 12.5% 3GPP limit. It is also observed that, through a 100-m optical wireless link and in the scenario of using two RSOAs, the EVMs of the uplink OFDM signals are below the 3GPP limit when the subcarrier indices are smaller than 106 (λ1) and 107 (λ2). The average EVMs for λ1 and λ2 are around 8.6% and 8.4%, respectively, both below the 12.5% 3GPP limit. In addition, with only one RSOA, the EVMs of the uplink OFDM signals remain below the 3GPP requirement only when the subcarrier indices are less than 82 (λ1) and 83 (λ2).
Note that the subcarrier indices of 82 (λ1) and 83 (λ2) obtained with one RSOA are much smaller than the corresponding values of 106 (λ1) and 107 (λ2) obtained with two RSOAs. With one RSOA, the average EVMs are approximately 12.8% (λ1) and 12.6% (λ2), above the 3GPP limit of 12.5%. The high average EVMs are caused by the incomplete erasure of the downstream modulated data when only one RSOA is used; a second RSOA further removes the downstream data. The uplink EVMs can therefore be reduced by using two RSOAs to effectively remove the downstream data. Average downlink/uplink EVMs below the 12.5% 3GPP limit demonstrate the practicability of building 5G WDM-based bidirectional OWC systems employing cascaded RSOAs to efficiently remove the downstream data.
Using QAM-OFDM modulation with a bit-loading technique is a powerful method to increase the aggregate transmission rate [23-26]. Subcarriers with higher signal-to-noise ratio support higher-order QAM-OFDM modulation (e.g., 64-QAM-OFDM or 128-QAM-OFDM), whereas subcarriers with lower signal-to-noise ratio support lower-order QAM-OFDM modulation (e.g., 4-QAM-OFDM or 16-QAM-OFDM). Since the signal-to-noise ratio is inversely related to the square of the EVM, subcarriers with lower EVM can adopt higher-order QAM-OFDM modulation, while subcarriers with higher EVM must fall back to lower-order QAM-OFDM modulation [27-29]. This dynamic allocation of 2^n-QAM-OFDM modulation optimizes the use of available resources and results in higher aggregate transmission rates. By adapting the modulation order to the quality of each subcarrier, the system maximizes its overall data transmission capacity.
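The EVM-driven allocation described above can be sketched in a few lines. Note that the EVM thresholds below are illustrative placeholders, not values from this work; real systems derive such limits from BER targets per QAM order.

```python
import math

# Hypothetical EVM thresholds (%) for each 2^n-QAM order, checked from
# highest order to lowest; these numbers are placeholders for illustration.
EVM_LIMITS = [(64, 8.0), (32, 11.0), (16, 12.5), (4, 17.5)]

def allocate_qam(evm_percent):
    """Return the highest QAM order whose EVM limit the subcarrier meets."""
    for order, limit in EVM_LIMITS:
        if evm_percent <= limit:
            return order
    return 0  # subcarrier too noisy to carry data

def aggregate_bits(subcarrier_evms):
    """Total bits per OFDM symbol under per-subcarrier bit-loading."""
    total = 0
    for evm in subcarrier_evms:
        order = allocate_qam(evm)
        if order:
            total += int(math.log2(order))  # bits carried by a 2^n-QAM symbol
    return total

# A clean subcarrier gets 64-QAM (6 bits); a noisier one falls back to 16-QAM.
print(allocate_qam(7.0), allocate_qam(12.0))  # 64 16
```

Summing the per-subcarrier bits is exactly how the aggregate rate gain of bit-loading arises: good subcarriers are not held back by the worst one.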
Methods

5G WDM-based bidirectional OWC systems with cascaded RSOAs
Figure 6a depicts the configuration of the 5G WDM-based bidirectional OWC systems with signal remodulation employing cascaded RSOAs. An actual point-to-point system is built rather than a simulated one. The output of the broadband light source, with 15 nm bandwidth (1541-1556 nm) and ±1.4 dB flatness, is boosted by an erbium-doped fiber amplifier (EDFA) and separated into four wavelengths by a 1×4 WDM DEMUX. The four wavelengths of λ1 (1549.3 nm), λ2 (1550.9 nm), λ3 (1552.5 nm), and λ4 (1554.1 nm) are multiplexed by a 4×1 WDM MUX and then supplied to an MZM via a polarization controller. Both the WDM MUX and DEMUX feature 1.6 nm wavelength spacing, 0.6 nm channel passband width, and >30 dB channel isolation. The 9.1-Gbit/s/10-GHz 16-QAM-OFDM signal generated by the OFDM transmitter is upconverted to a 9.1-Gbit/s/28-GHz signal using a mixer with an 18 GHz local oscillator signal. The upconverted signal then drives an MZM through a modulator driver. The optical spectra before and after the MZM are shown in Figs. 6b, c (inserts (i) and (ii) of Fig. 6a). The four modulated optical wavelengths travel through an EDFA with a flat amplifier gain of ±1.3 dB over 30 nm (1530-1560 nm) for WDM applications. A variable optical attenuator is positioned at the beginning of the optical wireless link so that the optical power emitted into free space can be optimized for the best link performance. Via two optical circulators (OC1 and OC2), the optical signals are delivered through a 100-m optical wireless link using a fiber collimator with a 0.06° divergence angle and a 1050-1620 nm wavelength range at the transmitting site and an optical dish antenna with a doublet lens at the receiving site. The optical dish antenna is a dish antenna with a 90 cm diameter and high-reflectivity (>99%) mirrors that first focus the laser light onto a mirror at the focal point and then reflect the laser light to the fiber ferrule of the doublet lens. The diameter of the laser light on the optical dish antenna is 20 cm. Since the system is based on a point-to-point OWC link and the laser travels through a 100-m optical wireless link, fully steering the laser beam to the input of the fiber ferrule remains a challenge 30,31. Optical equipment should be arranged to decrease the laser beam size so that the beam can be fully steered to the input of the fiber ferrule; the doublet lens at the receiving site serves this purpose. After circulation by OC2, the optical signal with four wavelengths is split into two parts using an optical splitter. One part of the optical signal is demultiplexed by a 1×4 WDM DEMUX, which has the same characteristics as the WDM DEMUX at the transmitting site. The optical spectra before and after the WDM DEMUX are presented in Figs. 6d, e (inserts (iii) and (iv) of Fig. 6a). Owing to the large channel passband width of 0.6 nm, the modulation sidebands can pass through the WDM DEMUX. The demultiplexed wavelength with the downlink OFDM signal is received by a 30-GHz PD, amplified by a low noise amplifier with a frequency range of 2-30 GHz and a noise figure of 2.4 dB, and transmitted to a digital sampling oscilloscope for downlink performance estimation. The other part of the optical signal is injected into two RSOAs (RSOA1 and RSOA2) via OC3 and OC4 to virtually erase the downstream modulated data and reproduce four pure optical carriers for uplink transmission. The optical spectra before and after the two RSOAs are exhibited in Figs. 6f, g (inserts (v) and (vi) of Fig. 6a). RSOA1 has a bandwidth of 3.6 GHz and a seeding power range of −22 to −10 dBm; the optical properties of RSOA2 are the same as those of RSOA1. The function of RSOA1 is to erase the downstream data from the incoming optical signals. The optical carriers with residual data are subsequently injected into RSOA2, which further erases the residual data. The downstream modulated data is effectively suppressed through the unsaturated RSOA1 and the gain-saturated RSOA2 32. RSOA1 operates at 60 mA bias current and −18 dBm seeding power, and RSOA2 operates at 100 mA bias current and −12 dBm seeding power. For uplink transmission, a 9.1-Gbit/s/10-GHz 16-QAM-OFDM signal is upconverted to a 9.1-Gbit/s/24-GHz signal using a mixer with a 14 GHz local oscillator signal. The upconverted signal then drives an MZM through a modulator driver. After amplification by an EDFA, a variable optical attenuator optimally controls the optical powers. Through routing by two optical circulators (OC2 and OC1), the optical signal with four wavelengths is transmitted wirelessly through a 100-m optical wireless link using a doublet lens with an optical dish antenna. At the receiving site, a fiber collimator is used to collect the transmitted optical signal. The received optical signal is demultiplexed by a 1×4 WDM DEMUX. The demultiplexed wavelength with the uplink OFDM signal is received by a 30-GHz PD and amplified by a low noise amplifier with a frequency range of 2-30 GHz. The amplified electrical signal is then fed into a digital storage oscilloscope for uplink performance analysis.
Additionally, Table 1 outlines the chief parameters of the experiment, including the 16-QAM-OFDM signal format, fiber collimator, optical dish antenna, doublet lens, PD, low noise amplifier, and RSOA1/RSOA2.
16-QAM-OFDM modulation/demodulation and data rate calculation
The OFDM modulation comprises serial-to-parallel conversion, QAM symbol mapping, inverse fast Fourier transform (IFFT), parallel-to-serial conversion, cyclic prefix (CP) insertion, and digital-to-analog conversion. In an OFDM symbol, the number of data subcarriers, pilot subcarriers, and CP samples, and the FFT size are set to 120, 8, 16, and 512, respectively. The pilot subcarriers are essential for tracking and compensating for phase shifts introduced by the channel; they help maintain the integrity of the modulated data carried by the data subcarriers 33,34. In each IFFT operation, 480 (120 subcarriers × 4 bits) informative data bits are transmitted. For a set of 512 data samples, this gives 480/512 = 0.9375 bit/sample. After the 16 CP samples are added to the 512 samples, a total of 528 (512 + 16) samples is obtained, giving 480/528 = 0.91 bit/sample. The digital-to-analog conversion outputs samples at a 10 GSa/s rate, so the data rate is 0.91 × 10 = 9.1 Gbit/s. For downlink transmission, each optical wavelength therefore carries a 9.1-Gbit/s/28-GHz 16-QAM-OFDM signal; for uplink transmission, each optical wavelength carries a 9.1-Gbit/s/24-GHz 16-QAM-OFDM signal.
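The data-rate arithmetic above can be reproduced in a few lines:

```python
# Reproduces the data-rate arithmetic from the text: 120 data subcarriers
# carrying 4 bits each (16-QAM), a 512-point FFT plus 16 CP samples,
# clocked out of the DAC at 10 GSa/s.
data_subcarriers = 120
bits_per_subcarrier = 4      # 16-QAM carries 4 bits per symbol
fft_size = 512
cp_samples = 16
dac_rate = 10e9              # samples per second

bits_per_sample = data_subcarriers * bits_per_subcarrier / (fft_size + cp_samples)
data_rate = bits_per_sample * dac_rate  # bits per second

print(round(bits_per_sample, 4))        # 0.9091
print(round(data_rate / 1e9, 1))        # 9.1 (Gbit/s)
```

The cyclic prefix is pure overhead here: dividing by 528 rather than 512 is exactly what pulls the rate from 9.375 down to 9.1 Gbit/s.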
Calculation for the divergence angle of the fiber collimator

The 100-m optical wireless link comprises a fiber collimator with a divergence angle of θ (degrees) at the transmitting site and an optical dish antenna with a doublet lens at the receiving site. Through a link of length L = 100 m, the diameter of the laser light on the optical dish antenna (D) is 20 cm and can be expressed as

D = 2L·tan(θ).    (1)

From Eq. (1), the divergence angle of the fiber collimator can be derived as θ = tan⁻¹(D/2L) ≈ 0.06°. The results show that a 100-m optical wireless link can be successfully achieved by using a fiber collimator with a divergence angle of 0.06° at the transmitting site and an optical dish antenna with a laser diameter of 20 cm at the receiving site.
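The divergence-angle geometry can be checked numerically, assuming the spot diameter grows with link length as D = 2·L·tan(θ), where θ is the half-angle measured from the beam axis (this interpretation of the quoted 0.06° is an assumption, but it matches the numbers in the text):

```python
import math

L = 100.0   # optical wireless link length, m
D = 0.20    # laser spot diameter on the optical dish antenna, m

# Half-angle divergence consistent with a 20-cm spot after 100 m.
theta = math.degrees(math.atan(D / (2 * L)))
print(round(theta, 2))  # 0.06 (degrees)
```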
In addition, to map an l-m free-space optical link with a set of doublet lenses onto the 100-m optical wireless link with a fiber collimator and an optical dish antenna, the laser diameter (d) through the l-m free-space optical link should be equal to or less than 0.2 m (200 mm). For a doublet lens with 75 mm diameter and 150 mm focal length, the diameter of the laser light at the transmitting site (dT) is derived as 45 mm [2 × 150 mm (focal length) × 0.15 (fiber numerical aperture)]. The corresponding beam radius and divergence angle of the laser light are derived as 3.6 μm and 24 × 10−6 radians, respectively. Under these conditions, the maximum l is calculated as 4060 m, meaning that a 100-m optical wireless link using a fiber collimator at the transmitting site and an optical dish antenna at the receiving site is equivalent to a 4060-m free-space optical link utilizing a set of doublet lenses at the transmitting and receiving sites.
Fig. 1 | The integration of 5G (5G, fifth-generation) signals with OWC (OWC, optical wireless communication) systems. The integration of 5G signals with OWC systems offers promising avenues for providing high transmission rates and meeting the growing demand for faster and more reliable wireless connectivity.
Fig. 6 | 5G (5G, fifth-generation) WDM (WDM, wavelength-division-multiplexing)-based bidirectional OWC (OWC, optical wireless communication) systems with cascaded RSOAs (RSOAs, reflective semiconductor optical amplifiers). a Configuration of 5G WDM-based bidirectional OWC systems with signal remodulation employing cascaded RSOAs. The optical spectra (b) before the MZM at the transmitting site, c after the MZM at the transmitting site, d before the WDM DEMUX at the receiving site, e after the WDM DEMUX at the receiving site, f before two RSOAs, and (g) after two RSOAs. EDFA erbium-doped fiber amplifier, DEMUX demultiplexer, PC polarization controller, MZM Mach-Zehnder modulator, OFDM orthogonal frequency-division multiplexing, LO local oscillator, VOA variable optical attenuator, OC optical circulator, MUX multiplexer, PD photodiode, LNA low noise amplifier, DSO digital sampling oscilloscope.
Table 1 |
The chief parameters of the experiment
Deep learning classification of shoulder fractures on plain radiographs of the humerus, scapula and clavicle
In this study, we present a deep learning model for fracture classification on shoulder radiographs using a convolutional neural network (CNN). The primary aim was to evaluate the classification performance of the CNN for proximal humeral fractures (PHF) based on the AO/OTA classification system. Secondary objectives included evaluating the model’s performance for diaphyseal humerus, clavicle, and scapula fractures. The training dataset consisted of 6,172 examinations, including 2–7 radiographs per examination. The overall area under the curve (AUC) for fracture classification was 0.89, indicating good performance. For PHF classification, 12 out of 16 classes achieved an AUC of 0.90 or greater. Additionally, the CNN model had excellent overall AUC for diaphyseal humerus fractures (0.97), clavicle fractures (0.96), and good AUC for scapula fractures (0.87). Despite the limitations of the study, such as the reliance on ground truth labels provided by students with limited radiographic assessment experience, our findings are in concordance with previous studies, further consolidating CNN as potent fracture classifiers in plain radiographs. The inclusion of multiple radiographs with different views from each examination, as well as the generally unselected nature of the sample, contributed to the overall generalizability of the study. This is the fifth study published by our group on AI in orthopaedic radiographs, which has consistently shown promising results. The next challenge for the orthopaedic research community will be to transfer these results from the research setting into clinical practice. External validation of the CNN model should be conducted in the future before it is considered for use in a clinical setting.
Introduction
Shoulder fractures include fractures of the proximal humerus, clavicle, and scapula. Among these, proximal humerus fractures (PHF) are some of the most common fractures in the elderly population. PHF account for approximately 5% of all fractures and occur most commonly in women [1]. Fractures are often caused by minimal trauma, such as a fall from standing height or less [2,3]. PHF can be divided into fractures through the tuberosities, metaphysis, surgical, and anatomical neck, in combination with a myriad of fragments. These range from simple extraarticular, unifocal fractures to complex articular, multifocal, and multifragmentary patterns that engage the entirety of the humeral head and metaphysis [4].
Classification of PHF was pioneered by Neer et al., who introduced the PHF 4-part classification based on fracture displacement and fragmentation patterns in 1970 [5,6]. The AO Foundation/Orthopaedic Trauma Association (AO/OTA) later introduced the AO/OTA Fracture and Dislocation Classification Compendium, most recently revised in 2018 [4]. However, there is no consensus among clinicians and researchers regarding which system is superior, and both systems are widely used and accepted in the orthopaedic community [7]. Interobserver agreement (IOA) in PHF classification can vary substantially depending on the skill level of the individual observer, and comparative studies of the Neer and AO/OTA PHF radiographic classifications suggest that IOA in PHF classification is fair to moderate at best, regardless of which classification is used [8-10].
Fractures of the clavicle and scapula are less common than humerus fractures, with clavicle fractures constituting 2.6-4% of fractures in adults and scapula fractures accounting for less than 1% of all fractures [11,12].
In recent years, neural network image classifiers have been established as an efficient model for data analysis in orthopaedic research [13]. Convolutional neural networks (CNN) have previously proven effective in detecting and classifying fractures in several anatomical locations. Chung et al. demonstrated the potential of a CNN in identifying and distinguishing PHF in plain anteroposterior (AP) radiographs using the Neer classification. When classifying complex fractures, their CNN performed better than experienced orthopaedic surgeons [14]. These findings suggest that CNN can detect and classify fractures with accuracy approximating and even surpassing that of human ability.
The aim of this study was to train and evaluate a CNN model for AO/OTA classification of shoulder fractures.
Study design and sample
A total of 7,189 plain radiographic shoulder examinations, conducted between 2002 and 2016, were extracted from the Danderyd Hospital Picture Archiving and Communication System (PACS). Examinations were conducted based on all standard indications for routine shoulder radiography at Danderyd Hospital, using standard pathology-specific protocols for radiographic shoulder assessment, where each examination consisted of 2-7 radiographs. Examinations were anonymized during the extraction process and were void of all patient data. Ethical permit was granted by the Stockholm Ethical Review Board, Sweden. Dnr: 2014/453-31/3. The Stockholm Ethical Review Board waived the need for informed consent for this study. In this study, shoulder fractures were defined as fractures of the humerus, scapula, or clavicle; however, humerus fractures were limited to proximal and diaphyseal fractures.
Datasets
The study sample (n = 7,189) was divided into a training (n = 6,221), validation (n = 562), and test dataset (n = 406). No patient overlap was present among the datasets. Examinations were extracted based on radiologist reports indicating radiographic examinations of the shoulder. Specific projections were not considered a criterion for sample extraction. After reviewing initial network classification performance, classes with poor performance were identified. To improve performance in these classes, we used active learning by increasing the number of examinations including the specific fracture class. The PACS database was scanned for radiologist reports containing wordings suggesting the selected fracture types. This introduced possible selection bias and was deemed acceptable to increase prediction precision for all classes.
Labelling of radiographic examinations
The extracted radiographic examinations were uploaded to an in-house developed online labelling platform and were labelled with respect to fractures. The training and validation datasets were labelled by three 4th-year medical students. Radiologist reports were attached to each examination when available, complementing the students' visual assessment. The labels were considered ground truth. Particularly ambiguous or difficult cases were revisited and reaudited by the medical students, often with senior surgeon supervision. The test set was labelled by four senior shoulder surgeons.
Fracture labels
Fractures in the proximal and diaphyseal humerus, clavicle, and scapula were labelled according to the AO/OTA classification. Radiographs containing more than one characteristic fracture were labelled accordingly, except when two or more groups or subgroups within the same type occurred simultaneously, as the hierarchical structure of the labelling platform did not allow for multiple classes within the same principal fracture type. Fracture labels were applied in a hierarchical manner, with the option of not assigning groups and/or subgroups that could not be determined by the observer. However, the principal fracture type was registered and included in the study sample regardless.
Definitions
Proximal humerus. Proximal humerus fractures are classified into 3 types, 6 groups, and 12 subgroups (Table 1). 'Class' was the compound term used for a specific group and/or subgroup and was addressed with the respective fracture code, e.g., A1.1 for the isolated greater tuberosity fracture. Qualifications and universal modifiers were applied when appropriate.
Diaphyseal humerus. Diaphyseal humerus fractures are classified into 3 types and 7 groups.
Clavicle. Clavicle fractures are classified into 3 locations and 9 types. In this study, only two locations, diaphyseal and lateral fractures, were used.
Scapula. Scapula fractures are classified into 3 locations, 8 types, and 5 groups. Process fractures (location A) were included as a group and not classified according to type. Body fractures (location B) were included and classified into types.
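The hierarchical AO/OTA class codes used for the labels (a type letter, an optional group digit, and an optional subgroup digit, e.g. A1.1) can be parsed mechanically. The helper below is a hypothetical sketch, not part of the study's labelling platform; it mirrors the rule that group and subgroup may be left undetermined:

```python
import re

def parse_aoota(code):
    """Split a hierarchical AO/OTA class code such as 'A1.1' into its
    type, group, and subgroup levels; group/subgroup may be absent."""
    m = re.fullmatch(r"([A-C])(\d)?(?:\.(\d))?", code)
    if not m:
        raise ValueError(f"not a recognised AO/OTA class code: {code!r}")
    ftype, group, subgroup = m.groups()
    return {"type": ftype, "group": group, "subgroup": subgroup}

print(parse_aoota("A1.1"))  # {'type': 'A', 'group': '1', 'subgroup': '1'}
print(parse_aoota("B1"))    # subgroup left as None (undetermined)
```

Representing undetermined levels as `None` matches the labelling rule that an observer could register the principal type without committing to a group or subgroup.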
Training the CNN and evaluating CNN classification performance
A modified CNN of the ResNet architecture was used, with a total of 35 convolutional layers, batch normalization for each convolutional layer, and an adaptive max pool [15]. The network was randomly initialized and trained using stochastic gradient descent. The 6,172 student-labelled examinations were used as training examples in CNN training and were considered ground truth in this setting. The images in the training dataset were processed by the CNN for 80 epochs (rounds). Images were scaled down from their original size to 256×256 pixels to fit the predefined image framework, and were additionally randomly cropped, rotated, and inverted. During training, the model was evaluated using the validation dataset, comprising 562 examinations. The final, adjusted model was evaluated using the test dataset, comprising 406 examinations.
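The training procedure (random initialization, stochastic gradient descent over 80 epochs) can be illustrated with a toy model. A one-parameter logistic classifier stands in for the 35-layer ResNet here, so only the loop structure (shuffle each epoch, per-example gradient step) reflects the study; everything else is a simplification:

```python
import math
import random

random.seed(0)

# Toy 1-D logistic classifier trained with plain SGD; it stands in for the
# study's ResNet-style CNN, which is far beyond a snippet.
data = [(x / 10.0, 1 if x > 5 else 0) for x in range(11)]
w, b, lr = 0.0, 0.0, 0.5

def loss(w, b):
    """Mean cross-entropy of the logistic model over the toy data."""
    total = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(data)

initial = loss(w, b)                # log(2) at w = b = 0
for epoch in range(80):             # 80 epochs, as in the study
    random.shuffle(data)            # stochastic: random visiting order
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        grad = p - y                # dL/dz for the logistic loss
        w -= lr * grad * x
        b -= lr * grad
print(loss(w, b) < initial)         # True: training reduced the loss
```

The same skeleton scales to the real setting by swapping the toy model and gradient for a CNN forward/backward pass and adding the augmentation (crop, rotate, invert) before each example is consumed.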
Statistics
Classification performance of the CNN model was evaluated using sensitivity, specificity, Area Under the Receiver Operating Characteristics (ROC) Curve (AUC), and Youden's index (J). AUC is a value between 0 and 1. For this study, we chose to define AUC between 0.7 and 0.8 as "acceptable", AUC between 0.8 and 0.9 as "excellent", and AUC 0.9 or higher as "outstanding", based on an article by Mandrekar on diagnostic test assessment [16]. J is defined as J = sensitivity + specificity − 1 and describes the maximum potential accuracy of a diagnostic test. All statistical analyses were performed with R, using the publicly available MLmetrics package and OptimalCutpoints for sensitivity, specificity, and J.
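Both headline metrics are simple to compute directly. The sketch below uses the rank-statistic form of AUC (probability that a random positive scores above a random negative, ties counting half) rather than the R packages named in the text; the labels and scores are hypothetical and serve only to exercise the formulas:

```python
def roc_auc(labels, scores):
    """AUC as the rank statistic: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative (ties 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_j(sensitivity, specificity):
    """Youden's index J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

# Hypothetical model scores, for illustration only.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(round(roc_auc(labels, scores), 2))  # 0.89
print(round(youden_j(0.92, 0.81), 2))     # 0.73
```

The second call reproduces J = 0.73 from the sensitivity/specificity pair reported for type C fractures in the Results.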
Overall model performance
The CNN exhibited excellent overall PHF classification performance, with AUC 0.92 (95% CI 0.88 to 0.95) for PHF. It classified 77 PHF with 83% sensitivity, 89% specificity, and J 0.79. A total of 329 examinations were classified as no PHF. The AUC was high (0.9) for all three fracture types. The most accurate class-specific predictions were found in the multifragmentary surgical neck fracture (subgroup A2.3) class, with AUC 1.0 (95% CI 0.99 to 1.00) and J 0.99. The least accurate class-specific predictions were in the A2.1 class, with mean AUC 0.73 (95% CI 0.29 to 1.0) and J 0.61. The predictive accuracy in all classes is displayed in Table 2.
A total of 406 examinations were used in the test set to evaluate the CNN classification performance, with 10 out of 13 PHF classes represented.Model performance for the different anatomical areas is summarized in Fig 2.
Articular or 4-part fracture (type C). 12 type C fractures were detected with 92% sensitivity, 81% specificity, and J 0.73.Predictive accuracy was good, with AUC 0.90 (95% CI 0.83 to 0.97).The most accurate type C classification predictions were found in the anatomical neck fracture with a multi-fragmentary metaphyseal segment with articular fracture class
Diaphyseal humerus fractures.
The training data included 216 diaphyseal humerus fractures, the validation data 39, and the test data 40. The overall precision for diaphyseal fractures was excellent, with an AUC of 0.97 (95% CI 0.94 to 0.99). The AUC for fracture type and group ranged from 0.88 to 0.97 (Table 3).
Clavicle fractures
The training data included 749 clavicle fractures, the validation data 87, and the test data 51. The overall precision for clavicle fractures was excellent, with an AUC of 0.96 (95% CI 0.92 to 0.99). The AUC for fracture type and group ranged from 0.82 to 0.98 (Table 4).
Scapula fractures
The training data included 243 scapula fractures, the validation data 87, and the test data 12.
The overall precision for scapula fractures was good, with an AUC of 0.87 (95% CI 0.92 to 0.99). The AUC for fracture type and group ranged from 0.74 to 1.00 (Table 5).
Other analyses
The interrater reliability was analysed using Cohen's Kappa and was 0.88 overall, for PHF 0.89, for diaphyseal fracture 0.88, for clavicle fractures 0.92, and for scapula fractures 0.71 [17].
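Cohen's Kappa compares observed agreement between two raters with the agreement expected by chance. A minimal sketch over hypothetical fracture-class labels (the labels below are illustrative, not study data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items:
    kappa = (observed - expected) / (1 - expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters assigned classes independently.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a | freq_b) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical fracture-class labels from two observers.
a = ["A1", "A1", "B1", "C1", "A1", "B1"]
b = ["A1", "A1", "B1", "C1", "B1", "B1"]
print(round(cohens_kappa(a, b), 2))  # 0.74
```

The chance correction is what distinguishes kappa from raw percent agreement, which is why it is the standard interrater statistic for multi-class labels like these.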
Discussion
In this paper, we present a deep learning model for fracture classification on shoulder radiographs. The overall AUC for fracture classification was 0.89, a good result by any standard. The results for PHF classification were even more impressive, with 12 of 16 classes achieving an AUC of 0.90 or greater.
To our knowledge, this is the first report to evaluate the classification performance of a CNN classifier using the AO/OTA classification of PHF. Our CNN model demonstrated high classification accuracy for most fracture types, and our findings are in concordance with the few previous studies on the applications of AI networks as fracture classifiers [14, 18-20]. The findings presented here further consolidate CNN as potent fracture classifiers in plain radiographs.
In addition, the CNN model had excellent overall AUC for diaphyseal humerus fractures (0.97) and for clavicle fractures (0.96), and good AUC for scapula fractures (0.87).
When compared to other studies on AI-assisted fracture classification, our study included a sample of 6,172 examinations in training, with 2-7 radiographs per examination, resulting in over 12,000 images. Our training set is twice the size of that of Urakawa et al. and three times that of Chung et al. [14,21].
Strengths and limitations
A major strength of this study is the large study sample and the inclusion of multiple radiographs with different views from each examination. We tried to include as many radiographs as possible, without excluding radiographs with suboptimal projections. In the clinical setting, radiograph quality is seldom perfect and depends on patient compliance. AI models for clinical use must be trained on data that represent the clinical reality to be useful. Furthermore, the study sample did not only include fracture radiographs. Including non-fracture radiographs is important for models that are aimed for use in the clinical setting, where fractures will only be present in a minority of radiographs. A clinically useful AI model must be able to perform well despite the skewed distribution of fractures and fracture classes and be able to identify rare fracture classes. Our model performed well, with AUC over 0.90 on several classes that contained only 3 cases.
One contributing factor to the high performance of our model might be the inclusion of several rather than singular radiographs. This further resembles clinical reality, where several radiographs are often assessed simultaneously in each examination. Another contributing factor to the model performance was not limiting the training data to a specific projection, which provided the CNN with additional training data, a choice we considered beneficial. Additionally, the generally unselected nature of the sample, combined with a wide spectrum of injuries, further contributes to overall generalizability. The AO/OTA classification system is complex and may be applied more in research settings than in clinical settings. The use of AI-assisted classification models could potentially amplify the clinical adoption of the AO/OTA classification, while simultaneously enhancing clinicians' understanding of this classification system.
This study has several limitations. The ground truth labels in the training and validation datasets were provided by students with limited previous experience in plain shoulder radiograph assessment. This fundamental limitation permeates the study results, because the CNN accuracy depends on the accuracy of the training data. A few measures were taken to reduce this influence. First, the validation data used to evaluate CNN classification performance was double audited. Second, the students collaborated with a senior orthopaedic surgeon specialized in shoulder surgery, revisiting and reviewing complex and ambiguous cases. The test set was reviewed separately by two senior shoulder surgeons, and cases with discrepancies between the reviewers were handled through consensus sessions. During the consensus sessions, the surgeons were blinded to who had suggested the original classes.
Reliability of the ground truth labels could have been further improved by having senior orthopaedic surgeons revisit and review all cases. Furthermore, having observers with previous experience in radiographic assessment, such as radiologists or orthopaedic surgeons, provide the labels might have improved the accuracy of the labels in the training data. By extension, such an approach would contribute to the overall reliability and generalizability of the study.
Several subgroups were not represented in the training data, and classification performance could not be evaluated in all AO/OTA PHF subgroups. By expanding the active learning selection, more subgroups could have been included in the training and validation datasets, enabling evaluation of classification performance in more of the AO/OTA PHF subgroups. This is a single-centre study, and external validation of the CNN model should be done in the future before the model is considered for use in the clinical setting (Carmo et al. 2021). This is the fifth study published on AI in orthopaedic radiographs by our group. The previous studies have demonstrated that AI can effectively utilise clinical radiographs and classify fractures with a classification system that was not primarily designed for AI use. These findings provide a promising indication that AI models may be implemented in clinical practice in the near future.
A total of 6,783 radiographic examinations were included and divided into a training dataset (n = 6,221), a validation dataset (n = 562), and a test dataset (n = 406). The numbers of the different types of shoulder fractures and their distribution in the different datasets are displayed in Fig 1.

Proximal humerus fractures

Distribution of fractures in the training data. Most fractures belonged to extraarticular, unifocal, 2-part fractures (type A, n = 465), followed by extraarticular, bifocal, 3-part fractures (type B, n = 60) and articular or 4-part fractures (type C, n = 47). All 6 groups and 11 of 12 subgroups were represented in the training data. Tuberosity fracture (A1) was the most common group (n = 374), and isolated greater tuberosity fracture (A1.1) was the most common subgroup (n = 370). Surgical neck fracture with lesser tubercle fracture (B1.2) was the only subgroup not represented in the training data.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2012-04-17T00:00:00.000
|
17073909
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/ecam/2012/145904.pdf",
"pdf_hash": "cdbfa6de369735095f6de9eee7f4a68807a6d7da",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:844",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "bdf3319e8697db313b7f7f7b23d782cd23a7c0a8",
"year": 2012
}
|
pes2o/s2orc
|
Biomedical Teleacupuncture between China and Austria Using Heart Rate Variability—Part 2: Patients with Depression
It has been shown in previous studies that the autonomic nervous system can be affected by acupuncture. Within this study, teleacupuncture between China and Austria is used for quantifying the effects on heart rate (HR) and heart rate variability (HRV) in 33 Chinese patients (27 females, 6 males; mean age ± SD 49.5 ± 13.1 years; range 22–72 years) suffering from depression. Electrocardiographic signals before, during, and after acupuncture at the acupoint Baihui (GV20) were recorded in Harbin and analyzed in Graz using teleacupuncture. HRV data were analyzed in the time and frequency domain. Mean HR decreased significantly (P < 0.05) during and after acupuncture, whereas total HRV increased significantly after the third acupuncture stimulation period (P < 0.05) and also 5–10 minutes after acupuncture (P < 0.05). The study shows that HRV could be a useful parameter for quantifying clinical effects of acupuncture on the autonomic nervous system.
Introduction
A recent Cochrane review identified 30 randomized controlled trials (RCTs) that evaluated manual acupuncture, electroacupuncture, or laser acupuncture in 2812 patients with major depressive disorder [1]. In this review, no consistent benefit was noted with any form of acupuncture [1]. However, our research group found acute stimulation effects on neurovegetative parameters like heart rate (HR) and heart rate variability (HRV) in patients with depression [2] and insomnia [3], and in poststroke patients [4].
An innovative concept, the current teleacupuncture technology, was implemented in 2010 at the Traditional Chinese Medicine (TCM) Research Center Graz in Austria (http://litscher.info/ and http://tcm-graz.at/), in cooperation with different institutions in China over a distance of several thousand kilometres [5][6][7].
This paper describes the second set of results from teleacupuncture measurements in patients with depression, using computer-based HRV recordings before, during, and after acupuncture under standardized clinical conditions in China. The first study in patients with depression was performed using the acupuncture point Jianshi (PC5) [2], and the present study used the acupoint Baihui (GV20). All analyses were performed in Graz, Austria [5].
Patients.
Thirty-three patients (27 females, 6 males; mean age ± SD 49.5 ± 13.1 years; range 22–72 years) suffering from depression (Chinese diagnosis "Yu Zheng") and therefore receiving acupuncture treatment were investigated at the Heilongjiang University in Harbin. Similar to our first study, the clinical evaluation of the patients was performed immediately before HRV data recording using three main scales: the Hamilton rating scale for depression (HRSD) [8], the Hamilton anxiety rating scale (HAM-A) [9], and the Athens insomnia scale (AIS) [10]. No patient was under the influence of centrally active medication. The study was approved by the ethics committee of the Heilongjiang University of Chinese Medicine (no. 2010HZYLL-030) and carried out in compliance with the Declaration of Helsinki. All patients gave oral informed consent.
Biosignal Recording in Asia and Data Analysis in Europe.
The duration of the RR intervals is measured during a defined time period (5 min), and HRV is determined on the basis of spectral analysis. Electrocardiographic (ECG) registration is performed using three adhesive electrodes (Skintact Premier F-55; Leonhard Lang GmbH, Innsbruck, Austria), which are applied to the chest.
The researchers in China used a medilog AR12 HRV system (Huntleigh Healthcare, Cardiff, UK) from the TCM Research Center at the Medical University in Graz for the joint investigations. This system has a sampling rate of 4096 Hz and can therefore detect R waves extremely accurately [11]. The raw data are stored digitally on a CompactFlash (CF) 32 MB memory card. After removing the card from the portable system, the data were read by a card reader connected to a standard computer in China and then transferred to the TCM Research Center Graz via the internet. With new software [5][6][7], the biosignals were analyzed and HRV was displayed in a way that helps to judge the function of the autonomic nervous system. Viewing this innovative kind of analysis helps to show how well the human body reacts to sport, stress, recovery, and also acupuncture [2][3][4][5][6][7][12].
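Before any HRV parameter can be computed, R peaks must first be located in the ECG to obtain the RR-interval series. The recorder's own detection algorithm is not published, so the following is only a naive, illustrative threshold-based sketch (the function name and its parameters are our assumptions); it also shows why the 4096 Hz sampling rate matters: one sample corresponds to about 0.24 ms of RR timing resolution.

```python
import numpy as np

FS = 4096  # sampling rate of the recorder (Hz)

def detect_r_peaks(ecg, fs=FS, thresh=0.5, refractory=0.25):
    """Naive R-peak detector: a local maximum above `thresh`,
    separated from the previous peak by at least `refractory` s.
    Illustrative only -- not the commercial system's algorithm."""
    peaks = []
    last = -fs  # allows a peak near the start of the record
    for i in range(1, len(ecg) - 1):
        if (ecg[i] > thresh
                and ecg[i] >= ecg[i - 1]
                and ecg[i] > ecg[i + 1]
                and i - last > refractory * fs):
            peaks.append(i)
            last = i
    return np.array(peaks)

print(1000.0 / FS)  # RR timing resolution in ms, ~0.244
```

RR intervals then follow as `np.diff(peaks) / FS` seconds.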
Similar to previous studies [2][3][4], mean HR, total HRV, and the LF (low frequency)/HF (high frequency) ratio of HRV were chosen as evaluation parameters, as recommended by the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology [13].
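The three evaluation parameters can be estimated from an RR-interval series along the following lines. This is a minimal sketch, not the analysis software actually used in Graz: SDNN stands in for "total HRV", the spectrum is a plain FFT periodogram of the evenly resampled RR series, and the LF (0.04–0.15 Hz) and HF (0.15–0.40 Hz) band limits follow the Task Force recommendations [13].

```python
import numpy as np

def hrv_parameters(rr_ms, fs=4.0):
    """Estimate mean HR (bpm), SDNN (ms) as a total-HRV proxy,
    and the LF/HF ratio from RR intervals given in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    mean_hr = 60000.0 / rr.mean()          # beats per minute
    sdnn = rr.std(ddof=1)                  # time-domain "total HRV" proxy

    # Resample the irregular RR series onto a uniform time grid
    t = np.cumsum(rr) / 1000.0             # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(grid, t, rr)
    rr_even -= rr_even.mean()              # remove the DC component

    # Crude periodogram via FFT
    spec = np.abs(np.fft.rfft(rr_even)) ** 2 / len(rr_even)
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)
    lf = spec[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = spec[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return mean_hr, sdnn, lf / hf
```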
Clinical Acupuncture and Procedure.
All 33 patients received manual needle acupuncture at the acupoint Baihui (GV20) on the head (Figure 1). Baihui is located 5 cun directly above the midpoint of the anterior hairline, at the midpoint of the line connecting the apexes of both ears. Its use is indicated, for example, in neurological diseases like depression, headache, dizziness, epilepsy, and mania [14]. Sterile single-use needles (0.30 × 25 mm; Huan Qiu, Suzhou, China) were used. Needling was performed horizontally (angle 15°, depth about 1 cun), and the needle was stimulated clockwise and counterclockwise for 15 seconds each, with six rotations per second, resulting in 90 rotations per direction and stimulation. Stimulation was done immediately after inserting the needle, 10 minutes later, and before removing the needle (cf. Figure 2 and [2]).
Statistical Analysis.
Data of the 33 patients from the 2nd Neurological Department of the Heilongjiang University of Chinese Medicine in Harbin were analyzed using SigmaPlot 11.0 software (Systat Software Inc., Chicago, USA). Graphical presentation of the results uses box plot illustrations. Testing was performed with the Friedman repeated measures ANOVA on ranks and the Tukey test. The criterion for significance was P < 0.05.

Results

Figure 3 shows the results of mean HR from the ECG recordings before, during, and after acupuncture of the 33 patients with depression. There was a significant decrease in HR during the second half of the acupuncture phase and after acupuncture (P < 0.05).
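The repeated-measures comparison (Friedman ANOVA on ranks) can be sketched with SciPy; the values below are synthetic stand-ins for the phase-wise mean HR of the 33 patients, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(7)
# Hypothetical repeated measures: mean HR of 33 patients in three
# phases (before, during, after acupuncture); simulated decrease.
before = rng.normal(75, 8, size=33)
during = before - rng.normal(3, 1, size=33)
after = before - rng.normal(4, 1, size=33)

stat, p = friedmanchisquare(before, during, after)
print(f"chi2 = {stat:.2f}, P = {p:.4g}")
```

Note that `scipy.stats` does not provide the Tukey post-hoc test on ranks used by SigmaPlot; pairwise Wilcoxon tests with a multiplicity correction would be a common substitute.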
In contrast to this decrease in HR, total HRV increased significantly (P < 0.05) only after finishing the third and last needle stimulation (Figure 4, phase g). This increase was still present at the end of the measurement procedure in comparison to the second phase (Figure 4, phase h; P < 0.05). It is interesting that between the stimulation phases total HRV was lowered again, with the median continually increasing with respect to the previous nonstimulation phase.
Evidence-Based Complementary and Alternative Medicine

Insignificant changes were found in the LF/HF ratio during acupuncture, as can be seen in Figure 5.
The results of the different scales as described in Section 2 showed the following mean ± SD values: HRSD 20.3 ± 4.1; HAM-A 19.4 ± 4.4; AIS 12.7 ± 4.8.
Discussion
Depression is one of the most prevalent and fastest-growing diseases in both the western and eastern worlds. New-generation antidepressants appear more effective than older drugs; however, many drugs have side effects that can affect compliance and morbidity [1]. From the point of view of TCM, the main syndromes of depression are qi stagnation and blood stasis, liver qi depression, and transformation of fire due to qi stagnation [15]. In China, there are several preclinical and clinical studies using Chinese herbal medicine, which are the basis for the design of new therapeutic programmes for the treatment of depression [15]. In addition, acupuncture is also used in several evidence-based studies concerning this topic of research.
Although there are a great number of referenced publications (see Section 1 and http://www.pubmed.gov), there are at the moment (January 2012) only seven articles (including our first study on the topic [2]) concerning depression, acupuncture, and HRV. These publications should be discussed in the following in the context of the results of this study [2]: in 2001, Callahan [16] stated that HRV has been shown to be a strong predictor of mortality and is adversely affected by problems such as anxiety and depression. Pignotti and Steinberg [17] demonstrated that a lowering of subjective units of distress was in most cases also related to an improvement in HRV. In the third paper in 2001, Sakai et al. [18] included HRV in a general concept of behavioural health services, and the authors reported HRV as a useful parameter. In 2003, Agelink et al. [19] also undertook a study to evaluate the effects of needle acupuncture on cardiac autonomic nervous system function in patients with minor depression or anxiety disorders. In contrast to our 33 patients, the 36 patients from that group were randomly distributed into a verum acupuncture group and a placebo group. Similar to our investigations, 5-minute intervals of ECG were analyzed, and the acupuncture group also showed a significant decrease of the mean resting heart rate, 5 and 15 minutes after needle application (cf. Figure 3). In the study by Agelink et al. [19], this effect was only significant with verum acupuncture in patients with minor depression or anxiety. Therefore, a relative increase of cardiovagal modulation of heart rate and physiological regulatory effects due to acupuncture stimulation could be detected in the present study, which confirms the results of other authors [19], although the acupuncture schemes were different (He.7 Shenmen and PC7 Neiguan [19] versus PC5 Jianshi [2] and GV20 Baihui (this study)).
In a further publication, Yun et al. [20] described in 2005 the dynamic range of biologic functions. They stated that reduced variation of physical exertion, environmental stressors, and thermal gradients that characterize modern life styles may reduce the autonomic dynamic range resulting in lowered HRV and a myriad of systemic dysfunctions. Acupuncture may operate through increasing autonomic variability.
As already mentioned in the previous part (part 1) of this study [4], a systematic clinical review on acupuncture and HRV was published by Lee et al. in 2010 [21], which searched the literature using 14 databases. Twelve RCTs met all inclusion criteria. Five RCTs found significant differences in HRV between patients treated with acupuncture and those treated with sham acupuncture (controls). The majority of the other RCTs showed inconsistent results [21]. The authors stated that more rigorous research appears to be warranted. The number, size, and quality of the available RCTs are too low to draw firm conclusions [21]. Another review article concerning the topic of HRV and acupuncture was published by our research group in 2007 [12]. In this paper, it could be demonstrated that in special syndromes like fatigue and stress one can counteract the aging process using different preventive methods like acupuncture [12]. This was demonstrated in recent investigations concerning patients with burn-out syndrome, performed in a further teleacupuncture study between Beijing and Graz [5,22].
The following conclusions can be drawn from the present clinical teleacupuncture study in patients with depression.

(1) Mean HR decreased significantly during and after acupuncture stimulation at Baihui.

(2) Total HRV increased significantly during and after acupuncture stimulation at Baihui.
(3) We have shown that teleacupuncture at the acupoint Baihui in patients with depression shows similar effects on neurovegetative parameters as stimulation of the acupoint Jianshi (PC5). In both studies ([2] and the present study) the same technique was used in different patients with the same disease.
|
v3-fos-license
|
2019-01-02T02:15:28.480Z
|
2015-03-31T00:00:00.000
|
76654583
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=55992",
"pdf_hash": "e763309b1932344b0d33454d789097ad3a5fdf1a",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:846",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "e763309b1932344b0d33454d789097ad3a5fdf1a",
"year": 2015
}
|
pes2o/s2orc
|
Gravitation and Electromagnetism Conciliated Following Einstein's Program
Einstein's program permits the conciliation of gravitation and electromagnetism. Besides the standard model, it forms a consistent system for universe description, founded upon a scalar field propagating at the speed of light c. Matter corresponds to standing waves. Adiabatic variations of frequencies lead to the electromagnetic interaction, constituted by progressive waves. The classical domain corresponds to the geometrical optics approximation, when frequencies are infinitely high, and then hidden. As interactions of matter, gravitation and electromagnetism derive from variations of its energy E = mc². The electromagnetic interaction energy derives from the mass variation dE = c²dm, and gravitation from the variation of the speed of light, dE = m dc². Contrarily to gravitation, only the electromagnetic interaction serves as a bridge between the classical and quantum frames, since it leans directly upon the wave property of matter: its energy dE = hdν = c²dm derives from variations of the matter energy E = hν = mc².
Introduction
The conciliation of gravitation with electromagnetism is one of the most persistent open problems in physics. The main difficulty lies in the fact that, until now, gravitation is still described by general relativity, in a classical and determinist framework, while electromagnetism, incorporated in the standard model, is described by a quantum field, in a probabilistic framework.
For physicists, the whole universe is nowadays theoretically described by the standard model, which forms a consistent system. It is constituted by matter interacting through three different kinds of forces. All are composed of fundamental particles which derive from relativist quantum fields, and behave either as waves or as particles. The standard model was validated in 2012 by the detection of the B.E.H., or Higgs, boson, representing its crowning. Since it does not include gravitation, it describes only a partial aspect of the universe. It is admitted as posterior to Planck's era.
By comparison, gravitation is well described by general relativity, based on a continuous field [1]-[5]. It has been largely confirmed by numerous experiments and by its theoretical consequences and practical applications. The graviton, as the quantum particle mediating the gravitational interaction, has not yet been detected and validated [6] [7]. Consequently, until proof to the contrary, gravitation remains well described by general relativity, in a classical framework.
In extension of general relativity and of his different discoveries, including those in quantum physics, such as stimulated emission, Einstein had proposed a consistent approach for physics, symmetrical to the standard model [1]. He privileged a continuous field, leaning on physical representations of phenomena, before their more precise mathematical description.
It has been supported, and validated, by the International Legal Metrology Organization. On the one hand, the speed of light in vacuum is admitted as a "pure", or primary, fundamental constant in experimental physics, with its numerical value strictly fixed. On the other hand, the standard for measures of time is based on the period of an electromagnetic oscillation.
In a previous article [8], we showed how Einstein's program forms a consistent system for universe description, beside the standard model. It allows us to complete our grasp of the universe, like both eyes give us access to three-dimensional vision, or both ears to stereophonic audition. It is founded upon a scalar field propagating at light velocity. Matter corresponds to standing waves, and electromagnetism, as a quantum interaction, to their adiabatic variations. The classical domain restricts to the geometrical optics approximation, when frequencies are infinitely high, and then hidden [9]-[12].
In this article we propose to show how Einstein's program permits the conciliation of gravitation and electromagnetism. Since both act as interactions of matter, they derive from variations of its energy E = mc². The electromagnetic interaction energy corresponds to the mass variation dE = c²dm, while the gravitation energy is linked to the variation of the light velocity, dE = m dc². Contrarily to gravitation, only the electromagnetic interaction energy dE = hdν = c²dm derives directly from the wave character of matter, with E = hν = mc², as an adiabatic variation.
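The decomposition just described can be restated compactly as the total differential of the matter energy (a restatement of the text above, not an additional result):

```latex
E = mc^{2}
\;\Longrightarrow\;
dE \;=\; \underbrace{c^{2}\,dm}_{\substack{\text{electromagnetism}\\ dE = h\,d\nu}}
\;+\; \underbrace{m\,dc^{2}}_{\text{gravitation}}
```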
History
The historical development of the interaction properties of matter with gravitation, and of the interaction properties of charges with electromagnetism, showed from the beginning how they were all closely linked together.
Gravitation was the first interaction discovered and formalized, in the 17th century. Newton's attractive force F = Gmm′/r² (1), exerted between two localized masses m, m′ separated by a distance r, introduced into physics the concept of force, applied to point-like gravity centers, together with the concept of particle. Afterward, one century later, the problem of harmonizing electromagnetism began to arise, when Coulomb's force between electric charges q, q′ revealed itself similar to (1), beyond the fact that it is attractive or repulsive depending on the signs of the charges. Both equations form parts of the same Newtonian field, acting instantaneously between two point particles in vacuum. In addition, the charges necessarily have matter as support. Special relativity replaced the instantaneous action at a distance between particles by an action propagating at the speed of light c, emphasizing that it occurs in vacuum. Henceforth, the speed of light played a fundamental role in physics, particularly in the space-time framework, as a link between space and time coordinates.
However, in spite of its extension feature, in Einstein's equation of general relativity, the local variations of the space-time properties, characterized by the metric tensor g_ij and leading to a curvature R, prevented maintaining it still empty. (Despite its usefulness, we did not consider the cosmological constant Λ, since the additional term g_ijΛ may figure in either side, according to its physical consequences.) The left side of (3), which is the principal feature of the theory, describes the gravitational properties of space-time, through a classical continuous field propagating at light velocity c, as resulting from the tensor g_ij and its derivative R_ij. They arise themselves from the matter-energy tensor T_ij of the right side, acting as sources, globally in motion with a speed v strictly inferior to c, and gathering different phenomenological and theoretical properties of matter-energy, through masses and interactions. Einstein's equation law for gravitation (3) derives directly as an extension of Newton's law (1). (It is known that (1) arises as an approximation of (3), remaining largely sufficient for usual terrestrial, and even spatial, applications for moving matter. It becomes insufficient for GPS because GPS concerns the propagation of electromagnetic rays.) Nevertheless, passing over from (1) to (3) was conditioned by the transformation of Coulomb's equation (2) on behalf of the static Poisson equation ΔV = −4πρ for electricity, after introducing a space-distributed potential V in place of the force F, and a continuous charge density ρ in place of the point-like charge q. It led to Maxwell's equations. According to Einstein, "The formulation of these equations is the most important event in physics since Newton's time, not only because of their wealth of content, but also because they form a pattern for a new type of law… The characteristic features of Maxwell's equations, appearing in all other equations of modern physics, are summarized in one sentence. Maxwell's equations are laws representing the structure of the field." [2]. Nowadays, it still appears that "One could believe that it would be possible to find a new and secure foundation for all physics upon the path which had been so successfully begun by Faraday and Maxwell. Accordingly, the revolution begun by the introduction of the field was by no means finished" [1].
At the present time, in view of physics unification into the standard model of particles, gravitation remains, after almost one century of efforts, the last one to be quantified, in order to rejoin the three others. Einstein's equation (3) gathers together separately in either side, without fusing them, not only gravitation and electromagnetism but also opposite entities, like fields propagating at light velocity c, and localized matter-energy. This is why, despite his awareness of general relativity's achievement, Einstein was "dissatisfied with the dualism of a theory admitting two kinds of fundamental physical reality: on the one hand the field and on the other hand the material particles. It is only natural that attempts were made to represent the material particles as structures in the field, that is, as places where the fields were exceptionally concentrated. Any such representation of particles on the basis of the field theory would have been a great achievement... This theory having brought together the metric and gravitation would have been completely satisfactory if the world had only gravitational fields and no electro-magnetic fields. Now it is true that the latter can be included within the general theory of relativity by taking over and appropriately modifying Maxwell's equations of the electro-magnetic field, but they do not then appear like the gravitational fields as structural properties of the space-time continuum, but as logically independent constructions. The two types of field are causally linked in this theory, but still not fused to an identity." [1].
Einstein's Program
In extension of general relativity and of his different discoveries, including those in quantum physics, such as stimulated emission, Einstein had proposed a consistent approach for physics, which appears at the present time as symmetrical to the standard model: "We have two realities: matter and field… We cannot build physics on the basis of the matter concept alone. But the division into matter and field is, after the recognition of the equivalence of mass and energy, something artificial and not clearly defined. Could we not reject the concept of matter and build a pure field physics?… We could regard matter as the regions in space where the field is extremely strong. In this way a new philosophical background could be created… Only field-energy would be left, and the particle would be merely an area of special density of field-energy. In that case one could hope to deduce the concept of the mass-point together with the equations of the motion of the particles from the field equations; the disturbing dualism would have been removed… One would be compelled to demand that the particles themselves would everywhere be describable as singularity-free solutions of the completed field equations. Only then would the general theory of relativity be a complete theory." [1].
As a general manner, new technologies evolve in accordance with Einstein's program, when they substitute, progressively and almost systematically, mechanical devices by electronic devices, based upon the electromagnetic field in place of matter. For instance, instead of printing documents on paper, they are rather numerically recorded. What is more specific is that, decades after Einstein's program was set, physicists began to bring it into effect, when they replaced the international standards of length and time, based on matter for two centuries, by electromagnetic standards, based on the period of a continuous field propagating at the speed of light. As far back as 1905, when Einstein established the special relativity theory, he used a light ray, and not a material rod, to measure the distance of a moving body. He anticipated the international standard of length adopted in 1960 by the International Legal Metrology Organization. Now it derives from the second, defined by the radiation period of the cesium 133 atom, and by the speed of light in vacuum, admitted as fundamental, with its numerical value strictly fixed. This allows measuring durations with 10⁻¹⁸ precision. Such measures, carried out by electromagnetic frequency reduction ratios, are the most precise in physics at the present time [13] [14].
Thus, not only has Einstein's program given numerous proofs of its validity, but it presents itself as a precise means to investigate the problem of the conciliation of gravitation and electromagnetism. All the more so as gravitation has strongly resisted quantification for almost one century. For this purpose we point out two of their main characteristic features.
The first one is explicit in the program, and was emphasized by Einstein since 1905 in special relativity: the speed of light c. Its basic role in the whole of experimental and theoretical physics has been legally confirmed in international standards, as a "pure" or primary fundamental constant, with its value numerically fixed. It is the speed of propagation in vacuum for the gravitational and electromagnetic interactions. On the other hand, the legal standard of time leans on the frequency of oscillation of a field propagating at the speed of light.
Standing Field Kinematics
In previous works [15] [16], we showed how the kinematic properties of standing waves of a scalar field propagating at light velocity c, with constant frequency ω and velocity v, are formally identical with the mechanical properties of isolated matter. The Lorentz transformation, which plays a fundamental role in special relativity, is specific to standing waves.
The geometric properties of standing waves are described by the space function u(k₀x₀), obeying Helmholtz's equation Δ₀u₀ + k₀²u₀ = 0. Its solutions verify the spherical Bessel functions, and particularly its simplest elementary solution, with spherical symmetry, finite at the origin of the reference system, and representing a lumped function, u₀(k₀r₀) = sin(k₀r₀)/(k₀r₀). In the geometrical optics approximation, when the frequency is very high and tends towards infinity, ω₀ = k₀c → ∞, the space function u₀ tends towards Dirac's distribution, u₀(k₀r₀) → δ(r₀). The standing wave of the field behaves as a free classical material particle isolated in space.
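The limit u₀ → δ can be checked numerically: the central lobe of sin(k₀r₀)/(k₀r₀) ends at its first zero r₀ = π/k₀, so its width shrinks like 1/k₀ as the frequency grows. A minimal sketch (the function names are ours):

```python
import numpy as np

def u0(k, r):
    # spherical standing-wave profile sin(k r)/(k r), with u0(0) = 1
    r = np.asarray(r, dtype=float)
    out = np.ones_like(r)
    nz = r != 0
    out[nz] = np.sin(k * r[nz]) / (k * r[nz])
    return out

def central_lobe_width(k):
    # first zero of sin(k r)/(k r) sits at r = pi/k, so the central
    # lobe narrows like 1/k -- the Dirac-distribution limit
    return np.pi / k

for k in (1.0, 10.0, 100.0):
    print(k, central_lobe_width(k))
```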
From a kinematical point of view, the central extremum of an extended standing wave, either at rest or in motion, is appropriate to localize its position x₀, exactly like the centre of mass for a material system. The four-dimensional Minkowski formalism traduces the invariance properties of standing waves at rest, when they move uniformly. Confirmation is found in the invariant quantities obtained from four-quantities, such as the coordinates x_μx^μ = x₀² or x_μx^μ = c²t₀², and the functions u_μu^μ = u²(x₀) or ψ_μψ^μ = ψ²(t₀). Their space-like or time-like characters are absolute, depending on their referring quantities defined in the rest system, in which the separation with respect to space or time occurs.
In order to point out their constant frequency, we express them accordingly. In special and general relativity, the equations are based on particles, as singularities, moving on trajectories. They lean then directly upon the geometrical optics approximation. The periodic equations, generic of standing fields, are hidden. The space coordinates x_α, involved in the metric, are point-like dynamical variables, and not field variables r which would describe an extended repartition in space.
Standing Field Dynamics
All the above equations are unlimited with respect to space and time, since x or t may become infinite. Usually, one imposes boundary conditions, in which matter acts either as a source fixing the frequency ω, or as a detector annealing it, as well as a geometrical space boundary fixing the wavelength λ through k = 2π/λ. This is not felicitous from the point of view of relativistic consistency, since space and time operate separately. In addition, matter is heterogeneous with regard to the field. In order to remain in a homogeneous frame, we rather consider boundaries provided by wave packets. Two progressive waves with different frequencies ω₁, ω₂ propagating in the same direction at light velocity give rise to a wave packet propagating in the same direction at light velocity, with a main wave of frequency ω = (ω₁ + ω₂)/2, modulated by a wave of frequency βω = (ω₁ − ω₂)/2 = Δω/2 = Δkc/2, wavelength Λ = 2π/βk, and period T = Λ/c. Since β < 1, the modulation wave acts as an envelope with space and time extensions Δx = Λ/2, Δt = T/2, leading to the well-known Fourier relations ΔxΔk = 2π and ΔtΔω = 2π.
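The beat construction above can be verified numerically: with c = 1 (so k = ω), the superposition cos(ω₁x) + cos(ω₂x) equals 2cos(ωx)cos(βωx), and the envelope extent and wavenumber spread reproduce ΔxΔk = 2π by construction. The variable names and numerical values are ours:

```python
import numpy as np

# Two co-propagating waves with nearby frequencies, in units with
# c = 1 so that k = omega; the numbers are illustrative only.
w1, w2 = 10.0, 9.0
w = (w1 + w2) / 2        # main-wave frequency
bw = (w1 - w2) / 2       # modulation frequency (the paper's beta*omega)
Lam = 2 * np.pi / bw     # modulation wavelength Lambda
dx = Lam / 2             # spatial extension of the envelope
dk = 2 * bw              # wavenumber spread Delta k = k1 - k2
print(dx * dk / np.pi)   # Fourier relation Dx*Dk = 2*pi -> prints 2.0
```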
Then, the Fourier relations represent homogeneous boundary conditions for the scalar field ε. From a physical point of view, they must necessarily be associated with the d'Alembertian equation (4) in order to complete it, and to emphasize that the field cannot extend to infinity with respect to space and time.
When the frequency difference βω = (ω₁ − ω₂)/2 = Δω/2 << ω is very small, it can be considered as a perturbation with respect to the main frequency, βω = δω. Then a wave packet can be assimilated to a progressive monochromatic wave with frequency Ω = ω ± δω, inside the limits fixed by the component frequencies ω₁ = ω + δω and ω₂ = ω − δω. By difference with standing-wave frequencies, which must be constant and monochromatic, progressive fields solutions of (4) may be more complex, with frequencies varying with space and time. An almost monochromatic wave is characterized by a frequency Ω(x, t) varying very slowly around a constant ω. From a physical point of view, we recognize the definition of an adiabatic variation of the frequency [17]. We can then expect that all the following properties of almost standing fields occur inside such a process. Instead of admitting the constant frequencies ω of elementary waves propagating all over space-time as given data, we rather consider that ω represents the mean value, all over the field, of different varying frequencies Ω(x, t). In other words, the modulation waves with perturbation frequency δΩ(x, t), propagating at light velocity, behave as interactions between the main waves, so that their frequency ω remains practically constant all over space-time.
C. Elbaz
proximation. This is equivalent to incorporating, in almost monochromatic solutions, the boundary conditions defined by the Fourier relations.
They lead to dynamical properties for energy-momentum conservation, and to least action principles, for standing fields and almost standing fields [9]-[12].
For a standing wave, either at rest or in motion, the frequency is constant, δΩ(x, t) = 0, so that (15) reduces to a continuity equation, where w_μ is a four-dimensional vector. This continuity equation for u² is formally identical with Newton's continuity equation for the matter-momentum density, ∂µ/∂t + ∇·(µv) = 0, with u² = µc². We are then led to admit, by transposition, that u² represents the energy density of the standing field. Following relations (8) and (9), in the spherically symmetric case, and for its kinematical behaviour, the space function u₀ can be reduced to its point-like centre of energy density, whose position is x₀. Since u² is a standing-wave energy density spread in space, and hence a potential energy density, −∇u² = −∇w_P = F is a force density, and ∂(u²v/c²)/∂t a momentum density. Then π_μν is a four-dimensional force density.
Equation (18), where the energy density w µ is a four-dimensional gradient ∂ µ a, is mathematically equivalent to a least-action relation. Transposing the mass density µ = u 2 /c 2 , and taking into account the two identities ∇P 2 = 2(P∇)P + 2P × (∇ × P) and dP/dt = ∂P/∂t + (v∇)P for c and v constant, after integration with respect to space we get the equation for matter dp/dt = −∇(mc 2 ) + ∇(mv) 2 /2m = ∇L m = −∇[m 0 c 2 (1 − v 2 /c 2 ) 1/2 ]. We retrieve the relativistic Lagrangian of mechanics for free matter, L m = −m 0 c 2 (1 − v 2 /c 2 ) 1/2 .
Electromagnetism
For an almost standing wave, the continuity equation implies that the total energy density W = U 2 = w + δW is the sum of the mean standing-wave density w and of the interaction density δW. The relations (18) then become relations (21). Unlike the four-dimensional density force π µν , which vanishes for a standing wave, only the total density force Π µν vanishes for an almost standing wave. In the first case this asserts the space stability of an isolated standing wave, while in the second case the space stability concerns the whole almost standing wave. It behaves as a system composed of two sub-systems, the mean standing field with high frequency Ω(x, t) ≈ ω and the interaction field with low frequency δΩ(x, t), each one exerting an equal and opposite density force π µν = −δΠ µν against the other.
In (18), the vanishing four-dimensional force-density tensor π µν of a standing wave asserts that the energy-momentum density four-vector w µ is four-parallel, i.e. directed along the motion velocity v. By comparison, for an almost standing wave, the total energy-momentum density tensor Π µν , which still vanishes, asserts likewise that the total energy-momentum density four-vector W µ is four-parallel, directed along the motion velocity v. However, the mean energy-momentum density tensor π µν no longer vanishes in (21) as it did in (18): the mean energy-momentum density four-vector w µ is then no longer parallel. This comes from the opposite density force δΠ µν exerted by the interaction.
It appears that an almost standing wave behaves as a whole system in motion which can be split into two sub-systems, the mean standing wave and the interaction field. Both move with velocity v, while exerting opposite forces on each other in different directions, including perpendicularly to the velocity v. The perturbation field, arising from the local frequency variations δΩ(x, t), introduces orthogonal components into the interaction density force and momentum.
The relations (20), generalized by the method of variation of constants for the mass M(x, t) = m ± δM(x, t), become relations (22). The density force δΠ µν ≠ 0 exerted by the interaction is formally identical with the electromagnetic tensor F µν = ∂ µ A ν − ∂ ν A µ ≠ 0. We can set them in correspondence, δΠ µν = eF µν , through a constant charge e, in which δM(x, t) = eV(x, t)/c 2 and δP(x, t) = eA(x, t)/c. The double sign for the mass variation corresponds to the two signs of electric charge, or to emission and absorption of electromagnetic energy by matter. We retrieve the minimal coupling of classical electrodynamics, P µ (x, t) = p µ + eA µ (x, t)/c, with M(x, t)c 2 = mc 2 + eV(x, t) and P(x, t) = p + eA(x, t)/c, where the electromagnetic energy exchanged with a particle is very small with respect to its own energy, eA µ (x, t)/c = δP µ (x, t) << p µ [18]. Electromagnetic interaction is thus directly linked to frequency variations of the field ε.
From (22) we then derive the relativistic Newton equation for charged matter, with the Lorentz force: dP/dt = −∇[m 0 c 2 (1 − v 2 /c 2 ) 1/2 ] + e(E + (v/c) × B).
Adiabatic Invariance
For an almost standing wave, in place of (16), we get from (13) and (15), to first-order approximation, the relations W = IΩ and W ν = IΩ ν (25), where W = w ± δW = µc 2 ± δµc 2 is the energy density, W ν = w ν ± δW ν = (µc 2 , µvc) the four-dimensional energy density, Ω = ω ± δΩ the frequency and Ω ν = (Ω, Ωv/c) the four-dimensional frequency, and where the double sign accounts for the frequency variation δΩ. The constant I is an adiabatic invariant density. In first approximation, these reduce to the energy-momentum densities and their variations, w ν = Iω ν , or µc 2 = Iω, µv = Iβk (26). Integrating the densities µ and I with respect to space leads to relations between four-energy and four-frequency through an adiabatic invariant H. Since Planck's constant h behaves as an adiabatic invariant [17], these relations show the proximity of h with electromagnetism, especially as both lean upon slight frequency variations. However, their rigorous connection remains unsolved, since h applies to all particles with different masses, while this does not seem to occur for H after integration of I with respect to space. Consequently, this holds even if, from a historical point of view, Planck's constant h was introduced in direct connection with electromagnetism. For matter at rest and in uniform motion, the interaction energies of electromagnetically charged matter dE m and of gravitation dE G derive from its total energy E 0 = M 0 C 2 . However, contrarily to gravitation, only electromagnetic energy is quantified, dE m = hdΩ, according to (28).
Concluding Remarks
Following Einstein's program, founded on a scalar field propagating at the speed of light, one can derive the main physical properties of matter and of the gravitational and electromagnetic interactions. Matter corresponds to standing waves, while interactions correspond to progressive waves. When frequencies are infinitely high, the oscillations become inaccessible in time, since they are too rapid, and inaccessible in space, since the wavelengths are too small. Only mean effects appear. Physical phenomena then exhibit themselves, theoretically and experimentally, as particles. The classical relativistic equations of mechanics correspond to the geometrical-optics approximation.
In the mechanics and electromagnetism domains, the very slight local variations, or local adiabatic variations, of an almost standing wave's frequency lead to variations of the energy density, or of the equivalent mass density, while the field velocity c and the motion velocity v = dx/dt are locally constant. The underlying invariance structure with respect to motion is expressed by the local Lorentz transformation, with invariant interval ds 2 = c 2 dt 0 2 = c 2 dt 2 − dx 2 . We then retrieve the main classical relativistic relations for matter, such as the variational principle and the energy-momentum conservation laws, and particularly its energy E = mc 2 . The variations of frequency lead to the quantum relation E = hν for matter (second quantification) and dE = hdν for the electromagnetic interaction (first quantification), as well as to the Fourier relations, homogeneous to the field, leading to the Heisenberg relations, homogeneous to matter. They lead also to an interaction formally identical with electromagnetism.
The variations of the light velocity lead to an interaction formally identical with gravitation. In the gravitational domain, the whole equivalent mass of an almost standing wave, or the total mass of matter, including interaction energy, is submitted to local variations of the field velocity C(x, t) and of the motion velocity V(x, t). The underlying invariance structure with respect to motion is expressed by the local invariant interval ds 2 = g ij dx i dx j of general relativity. Einstein's program thus makes it possible to reconcile gravitation and electromagnetism. Since they act as interactions of matter, both derive from variations of its energy E = mc 2 : electromagnetism from variation of the mass m, and gravitation from variation of the light velocity c. Electromagnetism alone, but not gravitation, derives from variation of the frequency ν of the matter energy E = hν = mc 2 , leading to its quantification.
This may provide insight into the theoretical difficulties encountered in incorporating gravitation into the standard model of particles, and into the experimental difficulties in detecting the graviton as a mediating quantum particle.
|
v3-fos-license
|
2021-10-15T15:58:21.489Z
|
2021-10-04T00:00:00.000
|
240425888
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/13/19/3973/pdf?version=1633767770",
"pdf_hash": "37c85ce17bba9692562e1fa098bcc0abc56b3b42",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:847",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "8ada0da1ea20fce3082c4c3291e11415be4f0d5f",
"year": 2021
}
|
pes2o/s2orc
|
Galileo E5 AltBOC Signals: Application for Single-Frequency Total Electron Content Estimations
Global navigation satellite system signals are known to be an efficient tool to monitor the Earth's ionosphere. We suggest Galileo E5 AltBOC phase and pseudorange observables (a single-frequency combination) to estimate the ionospheric total electron content (TEC). We performed a one-month campaign in September 2020 to compare the noise level of different TEC estimations based on single-frequency and dual-frequency data. Unlike GPS, GLONASS, or Galileo E5a and E5b single-frequency TEC estimations (involving signals with binary and quadrature phase-shift keying, BPSK and QPSK, or binary offset carrier (BOC) modulation), the extra-wideband Galileo E5 AltBOC signal provided the smallest noise level, comparable to that of dual-frequency GPS. For elevations higher than 60 degrees, the 100 s root-mean-square (RMS) of TEC, an estimated TEC noise proxy, was as follows for the different signals: ~0.05 TECU for Galileo E5 AltBOC, 0.09 TECU for GPS L5, ~0.1 TECU for Galileo E5a/E5b BPSK, and 0.85 TECU for Galileo E1 CBOC. Dual-frequency phase combinations provided RMS values of 0.03 TECU for Galileo E1/E5, and 0.03 and 0.07 TECU for GPS L1/L2 and L1/L5. At low elevations, E5 AltBOC provided at least a factor of two less single-frequency TEC noise than data obtained from E5a or E5b. The short dataset of our study could limit the obtained estimates; however, we expect that the AltBOC single-frequency TEC will still surpass the BPSK analogue in noise parameters when the solar cycle evolves and geomagnetic activity increases. Therefore, AltBOC signals could advance geoscience.
Introduction
Many scientific problems and practical applications (involving transionospheric propagation) require reliable monitoring of ionospheric variability at different spatial and temporal scales. For some applications, engineers need the 3D electron density distribution, but often they need only the total electron content (TEC), an integral parameter.
To estimate TEC, scientists have suggested radio beacons which provide data on the Faraday rotation of the signal polarization plane [1] or on the signal phase and pseudorange (group delay) [2,3]. The first approach requires geomagnetic field data along the line of sight and linearly polarized signals. This makes the second approach more usable for low Earth orbit (LEO) [4], medium Earth orbit (MEO) [5], and geostationary Earth orbit (GEO) [6][7][8] satellite data. Global navigation satellite systems (GNSS)-such as GPS, GLONASS, Galileo and BeiDou-include MEO and GEO (BeiDou) satellites which provide global coverage of stable signals at multiple coherent operating frequencies. Thus, global navigation satellite system signals have become an efficient tool to monitor the Earth's ionosphere. GNSS TEC provides a basis for different techniques: GNSS radio tomography [9][10][11], GNSS radio interferometry of travelling ionospheric disturbances (TID) [12], ionosphere mapping [13,14], absolute TEC estimation [15], and ionospheric perturbation indices estimation [16][17][18]. Scientists use these techniques and data to study space weather, to create empirical or first-principles ionospheric models [19,20], to estimate the quality of different models [21,22], and to update ionospheric models [23].
Most of the above studies involve dual-frequency phase and pseudorange observations. The dual-frequency approach exploits the frequency dependence of the ionospheric delay. The single-frequency approach exploits the opposite dependence of ionospheric effects on phase and pseudorange observations. For binary phase-shift keying (BPSK) and binary offset carrier (BOC) [24] modulation, the noise of pseudorange observations exceeds that of phase observations. This results in high noise in dual-frequency pseudorange TEC and in single-frequency TEC. The high noise limits the applications of single-frequency TEC, though exceptions are some data from geostationary satellites [7] and from low-end GNSS receivers in legacy smartphones [25].
Satellites' clock stability and the coherency of the two operating frequencies affect the dual-frequency TEC estimates. Thus, EGNOS dual-frequency phase TEC noise exceeds that of GPS/GLONASS single-frequency TEC [6]. However, advances in GNSS signals have allowed the signal-to-noise ratio (SNR) to be increased and the noise in observables (and subsequently in TEC) to be decreased, mostly by implementing advanced signal coding. Among these advanced coding schemes is AltBOC (alternative BOC), described in detail in [26]. The AltBOC signal features an extra-wide band, twice the bandwidth of QPSK signals, and provides a very steep autocorrelation function.
Following [27], we considered the properties of the AltBOC signal that can affect noise in TEC estimates. The upper panel in Figure 1 shows the autocorrelation functions of BPSK and AltBOC(15,10) signals. The integers (m,n) and (n) in brackets stand for the multipliers of the subcarrier frequency f s = m × f 0 and the chip-rate frequency f chip = n × f 0 , with f 0 = 1.023 MHz, a value typical for GNSS; T chip is the chip length. The main correlation peak of the AltBOC signal is steeper than that of the BPSK signal.
Because code-tracking noise is inversely proportional to the steepness of the autocorrelation function, we expect a decrease in pseudorange noise for the AltBOC signal compared to the BPSK signal. The middle panel in Figure 1 shows pseudorange noise (σ code ) vs. the carrier-to-noise ratio (C/N0) for AltBOC(15,10), BPSK(10) and BPSK(1) signals. For the calculations we used the following parameters: a delay-locked-loop filter bandwidth of 1 Hz, a delay-locked-loop correlator spacing of 1/12 chip for BPSK(1) and 1/5 chip for BPSK(10) and AltBOC(15,10), and a correlation time of 20 ms for BPSK(1) and 100 ms for BPSK(10) and AltBOC(15,10).
The AltBOC signal outperforms both BPSK(1) and BPSK(10), with a noise below 5 cm down to a C/N0 of 35 dB-Hz. The bottom panel in Figure 1 describes the signals' resistance against multipath (code multipath envelopes) for BPSK and AltBOC signals, assuming an early-late power discriminator with the above-mentioned spacing and one reflected ray with a signal-to-multipath ratio SMR = 2.
The AltBOC modulation provides much higher multipath resistance than the BPSK(1) modulation. AltBOC signals surpass BPSK(10) signals in mitigating long-delay multipath, while the signals have comparable characteristics for short multipath delays. Such an improvement in both pseudorange noise and multipath resistance should improve the noise of TEC estimates that rely on pseudorange observables.
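The qualitative point that a subcarrier-modulated signal has a steeper main correlation peak than plain BPSK can be checked numerically. The sketch below is our illustration, not the authors' code: a random spreading code stands in for the real ranging codes, and a square subcarrier with 1.5 cycles per chip mimics the AltBOC(15,10) subcarrier-to-chip-rate ratio.

```python
import numpy as np

def acf(x, lag):
    """Normalized autocorrelation of x at the given sample lag."""
    if lag == 0:
        return 1.0
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(1)
spc = 30                                   # samples per chip
chips = rng.choice([-1.0, 1.0], size=4000)
code = np.repeat(chips, spc)               # BPSK-like baseband code

# BOC-like variant: the same code multiplied by a square subcarrier with
# 1.5 subcarrier cycles per chip (the AltBOC(15,10) ratio); the half-sample
# phase offset keeps the cosine away from exact zero crossings.
t = np.arange(code.size)
subc = np.sign(np.cos(2 * np.pi * 1.5 * (t + 0.5) / spc))
boc = code * subc

lag = spc // 10                            # a 0.1-chip lag
drop_bpsk = 1.0 - acf(code, lag)
drop_boc = 1.0 - acf(boc, lag)
print(f"ACF drop at 0.1 chip: BPSK {drop_bpsk:.2f}, BOC-like {drop_boc:.2f}")
```

The correlation of the subcarrier-modulated signal falls off several times faster near zero lag, which is the property behind the lower code-tracking noise discussed above.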
Recently, Galileo started to exploit extra-wideband E5 AltBOC signals [26], available with modern geodetic receivers. The current article studies the potential of Galileo E5 AltBOC signals for TEC estimation. For that, we compare TEC noise when different observables are used, and analyze the rate-of-TEC index deduced from AltBOC signals.
Galileo E5 AltBOC Signal
The Galileo satellites transmit E5 signals in the [1164-1215 MHz] band, which is the largest radionavigation satellite system (RNSS) band. It is also a highly protected aeronautical radio navigation services (ARNS) radio band, but it is not exclusive to RNSS. That means that Galileo E5 signals share this band with other GNSS signals, as well as with non-RNSS services. In particular, GPS L5 and L2C, QZSS L5S and L2, SBAS L5, IRNSS L5, BeiDou B2a/B2b, as well as the future GLONASS L3 all fall within this band. Figure 2 compares the spectrum of the Galileo E5 signal with the spectra of GPS L5 and L2C. These spectra were obtained from the MSU test site equipped with a JAVAD Delta3 receiver according to the procedure described in [28]. (Section 4 provides information about the MSU test site.) The Galileo E5a signal overlaps with the GPS L5 signal, which has similar signal characteristics (see Figure 2), as well as with the BeiDou B2a signal. The Galileo E5b signal does not interfere with any of the GPS signals, but it has the same frequency and modulation as the BeiDou B2b signal and is very close to the future GLONASS L3 signal. Figure 2 shows that the spectrum of the E5 AltBOC signal has twice the bandwidth of GPS L5 and L2C. That fact, together with the steeper autocorrelation function of the AltBOC-modulated signal [29], should lead to significant improvements in positioning and multipath mitigation and, thus, to a decrease in noise for single-frequency TEC estimation.
Table 1 presents the details of the Galileo E5 signal [30]. The signal consists of two components, E5a and E5b, centered on 1176.45 MHz and 1207.14 MHz, respectively.
The E5b and E5a signals are QPSK-modulated with 10230-chip-long codes at 10.23 MHz. Both components include a data (nav) channel (I, in-phase) and a pilot channel (Q, quadra-phase) with equal powers. A receiver can treat both channels (data and pilot) as two independent BPSK-modulated signals.
Being in adjacent bands, the E5a and E5b signals are transmitted coherently using AltBOC(15,10) 8-PSK modulation [26], with the same filter and high-power amplifier (HPA) operating at saturation for higher efficiency. The whole Galileo E5 signal is thus an extra-wideband signal (see Figure 2) that can be received either as a whole or separately.
When processing E5a and E5b signals simultaneously, the whole E5 band (51.15 MHz minimum bandwidth) should be downconverted through the same RF/IF chain. The extra-wide band requires a rather high sampling rate, which GNSS receivers have not provided until recently, because it was hard to implement in hardware. However, such extra-wideband receivers benefit from pseudorange measurements that are the most resistant among GNSS signals to thermal noise, multipath and narrow-band interference [27,29,31]. In turn, these measurements should provide a low noise level for single-frequency TEC [31,32].
Processing the E5a and E5b signals separately, as two independent QPSK-coded signals, does not require an extra-wideband receiver, thus reducing its complexity. This is exactly the way the majority of current-generation professional GNSS receivers operate. In this case, low-noise TEC estimates can be obtained only with dual-frequency E5a/E5b phase measurements, while range measurements contain significant noise due to the narrower bandwidth and less sophisticated coding.
Note also that the minimal received power of both the Galileo E5a and E5b signals exceeds that of the Galileo E1 signal by 2 dB [30]. Therefore, using E5a/E5b, we could assume better performance (compared with Galileo E1) in the case of signal obstruction.
Ionospheric TEC Estimation with GNSS Signals
As mentioned above, TEC can be estimated using either dual- or single-frequency pseudorange and carrier-phase measurements. In the first case, the linear combinations of phase (L i and L j ) or pseudorange (P i and P j ) measurements at two frequencies f i and f j give the slant TEC estimate along the receiver-satellite line of sight via the following well-known relations [33]:

TEC = (1/K) · f i 2 f j 2 /(f i 2 − f j 2 ) · (L i λ i − L j λ j + const), (1)

TEC = (1/K) · f i 2 f j 2 /(f i 2 − f j 2 ) · (P j − P i + DCB), (2)

where K = 40.308 m 3 /s 2 , λ i = c/f i with c the speed of light in a vacuum, const represents undefined carrier-phase ambiguities, and DCB stands for the sum of differential code biases in the satellite transmitting and receiver receiving chains. For Galileo, (L i , L j ) and (P i , P j ) correspond to phase and code measurements for any pair of signals. Combination (2) proved to be very noisy compared to (1) when applied to BPSK- and BOC-coded signals. We used the dual-frequency combination (1) as a reference in the comparative TEC noise analysis.
A single-frequency pseudorange/carrier-phase combination for slant TEC estimation can also be constructed by exploiting the fact that the ionospheric contribution enters the phase and group refractive indices with opposite signs [34]:

TEC = f i 2 /(2K) · (P i − L i λ i + const), (3)

where the same notations apply and const once again stands for unknown carrier-phase ambiguities. When one uses combination (3), significant noise appears for BPSK- and BOC-coded signals due to the pseudorange measurements. Moreover, like combination (1), it provides only relative estimates of slant TEC due to the unknown initial phase. This is the reason that the single-frequency combination (3) is not widely applied in ionospheric studies. New extra-wideband GNSS signals (i.e., Galileo E5 AltBOC) could resolve some issues arising with single-frequency combinations, especially the TEC noise problem: the wider spectral occupancy and the steeper main peak of the autocorrelation function of such signals result in lower noise and higher multipath robustness.
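As a sketch of how the dual-frequency phase combination (1) and the single-frequency combination (3) translate into code (our illustration, not the authors' software; phases are assumed in cycles and pseudoranges in meters):

```python
import numpy as np

K = 40.308           # m^3/s^2
C = 299_792_458.0    # speed of light in vacuum, m/s
TECU = 1.0e16        # electrons per m^2 in one TEC unit

def tec_dual_phase(L_i, L_j, f_i, f_j):
    """Relative slant TEC (TECU) from dual-frequency carrier phases,
    combination (1); the carrier-phase ambiguity constant remains."""
    lam_i, lam_j = C / f_i, C / f_j
    coef = f_i**2 * f_j**2 / (K * (f_i**2 - f_j**2))
    return coef * (np.asarray(L_i) * lam_i - np.asarray(L_j) * lam_j) / TECU

def tec_single_freq(L_i, P_i, f_i):
    """Relative slant TEC (TECU) from one frequency, combination (3):
    the ionosphere enters code and phase delays with opposite signs,
    so the code-minus-phase difference equals twice the ionospheric delay."""
    lam_i = C / f_i
    return f_i**2 * (np.asarray(P_i) - np.asarray(L_i) * lam_i) / (2 * K) / TECU
```

With synthetic observables containing a known ionospheric delay, both functions recover the injected TEC up to the ambiguity constant.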
Below, we provide the comparison of noise characteristics for the slant TEC estimated via (3) with BPSK-, BOC- and AltBOC-coded signals, assuming the dual-frequency combination (1) as a reference. We corrected the raw data to mitigate cycle-slip effects: a cycle slip is detected when consecutive TEC values exceed the previous ones (in absolute value) by more than 4 TECU, and the TEC jump then provides a correction constant for the TEC values after the slip.
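A simplified version of such a cycle-slip repair can be sketched as follows (our illustration, assuming single-sample jumps and the 4 TECU threshold quoted above):

```python
import numpy as np

def repair_cycle_slips(tec, threshold=4.0):
    """Subtract the accumulated jump from all samples after each
    detected slip (consecutive difference above `threshold` TECU)."""
    tec = np.asarray(tec, dtype=float)
    out = tec.copy()
    correction = 0.0
    for i in range(1, len(tec)):
        step = tec[i] - tec[i - 1]
        if abs(step) > threshold:
            correction += step          # remember this jump
        out[i] = tec[i] - correction    # remove all jumps so far
    return out
```

A series with a 5 TECU jump, e.g. [1.0, 1.1, 1.2, 6.2, 6.3], is restored to a continuous arc.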
Following [6], we used the TEC root-mean-square within 100 s (the 100 s TEC RMS) as a proxy for TEC noise throughout this work:

RMS TEC,100s = sqrt( (1/N) Σ i=1..N (TEC i − ⟨TEC⟩) 2 ), (4)

where N is the number of samples within the 100 s interval and ⟨TEC⟩ is the mean over that interval. The 100 s interval was selected for two reasons: on the one hand, it is long enough to provide a statistically significant amount of TEC data; on the other hand, it is short enough to limit the influence of ionospheric variability (which usually has larger timescales) on the obtained results.
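The 100 s RMS proxy described above can be sketched as follows (our illustration, assuming uniformly sampled TEC and non-overlapping windows):

```python
import numpy as np

def tec_rms_100s(tec, dt=1.0, window=100.0):
    """RMS of TEC about its mean within each non-overlapping window,
    returned as one value per complete window."""
    n = int(round(window / dt))                        # samples per window
    m = len(tec) // n                                  # complete windows
    blocks = np.asarray(tec[: m * n], dtype=float).reshape(m, n)
    return blocks.std(axis=1)                          # RMS about window mean
```

For a pure sinusoid of amplitude A spanning whole periods per window, the proxy returns A/sqrt(2), as expected for an RMS.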
Note that, assuming DCBs are known or calibrated, carrier-levelling or code-smoothing procedures can be applied to (1) and (2), providing absolute values of TEC, while, due to unknown phase ambiguities, the single-frequency combination (3) seems suitable for monitoring TEC changes rather than absolute TEC values. Nevertheless, an approach for resolving the unknown constants in (3), quite similar to DCB estimation [34], could be adopted, making single-frequency relative TEC estimates quite useful for applications that require absolute TEC data.
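The carrier-levelling idea mentioned here, anchoring the precise but relative phase TEC to the noisy but absolute code TEC, can be sketched per satellite arc (our illustration; DCBs are assumed already removed from the code TEC):

```python
import numpy as np

def level_phase_to_code(tec_phase, tec_code):
    """Shift phase-derived TEC by the mean phase-to-code offset over one
    continuous arc, removing the carrier-phase ambiguity constant."""
    tec_phase = np.asarray(tec_phase, dtype=float)
    offset = np.nanmean(np.asarray(tec_code, dtype=float) - tec_phase)
    return tec_phase + offset
```

Averaging over the whole arc suppresses the code noise in the offset, so the levelled series keeps the low noise of the phase observable.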
Experimental Setup
Currently, there are 24 Galileo satellites continuously transmitting wideband E5 AltBOC signals, which can be used to estimate the ionospheric TEC via the single-frequency combination (3). The number of receiving sites capable of working with that type of signal is also increasing. To analyze the noise level of TEC estimation with E5 AltBOC signals, we performed a one-month campaign in September 2020. The test receiver MSU was located on the roof of the Faculty of Physics, Lomonosov Moscow State University, Russia. Table 2 shows the coordinates and technical characteristics of the receiver. We performed our campaign in the early ascending phase of solar cycle 25; the monthly average F10.7 was 71 s.f.u. The studied equinox period covered mostly undisturbed conditions, except for 2 minor geomagnetic storms (Kp indices reached 5o on September 28 and 5+ on September 27). However, we do not expect significant effects on GNSS signals at mid-latitudes for these minor storms.
Experimental facilities included two receivers (Sigma and Delta3), but only one (Delta3) treated the extra-wideband E5 signals, while the other (Sigma) processed E5a/E5b signals separately. In the current study, we show only the Delta3 receiver data. However, both receivers used the same RingAnt antenna, connected by a 20 m RG-8x cable through a Mini-Circuits ZB4PD1-2000-s splitter (6 dB loss). To adjust the input signal (which is split between the two receivers), the chain included a 20 dB low-noise amplifier (LNA) with a bandpass of 1.1-1.65 GHz (see Figure 3). The amplifier resulted in comparatively high SNR values for all GNSS signals observed at this site.
Experimental Results
This section considers an example of TEC data on 2 September 2020, a statistical analysis of TEC observations based on different observables, and, finally, the application of single-frequency Galileo TEC to ROTI (rate of TEC index) calculations.
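The ROTI mentioned here is conventionally defined as the standard deviation of the rate of TEC (ROT, in TECU/min) over a 5 min window. A minimal sketch (our illustration, assuming 30 s sampling and non-overlapping windows):

```python
import numpy as np

def roti(tec, dt=30.0, window=300.0):
    """ROTI: standard deviation of the rate of TEC (TECU/min),
    one value per complete window."""
    rot = np.diff(np.asarray(tec, dtype=float)) / (dt / 60.0)  # TECU/min
    n = int(round(window / dt))                                # ROT samples/window
    m = len(rot) // n                                          # complete windows
    return rot[: m * n].reshape(m, n).std(axis=1)
```

A smoothly drifting TEC series yields ROTI near zero; irregularity-driven fluctuations raise it.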
Galileo TEC Observations: Case Study on 2 September 2020
The top panel in Figure 4 shows an example of the signal-strength observable (SNR) from the Galileo E11 satellite on 2 September 2020. The dynamics of the SNR follow the dynamics of the elevation: low signal strength when the satellite rises above the horizon, maximal SNR at maximal elevation, and minimal SNR at the end of the pass. Sharp decreases in SNR correspond to multipath effects. The Galileo E5 signal strength (S8; see Table 1 for RINEX notations of the observables) exceeds the signal strength of the other signals, while the E1 signal features the smallest signal strength (S1). The E5b signal strength (S7) exceeds the E5a signal strength (S5).
The relative slant TEC (the middle panel in Figure 4) shows typical dynamics with elevation as well, including higher TEC values at low elevations and minimal TEC values at high elevations. Decreases in SNR result in sharp variations in the TEC data. It is evident that the single-frequency L1C1 TEC combination (from here on, L stands for phase measurements, while C stands for code pseudorange measurements) provides the noisiest data, while the single-frequency L8C8 and dual-frequency L1L5 combinations provide less noisy data. Note also that the receiver is located in the Moscow urban area; 200 m to the north, the 240-m-high main building of Lomonosov Moscow State University dominates the view from the receiving site. That may lead to additional errors due to multipath effects. To mitigate them, we excluded the corresponding azimuthal directions from the analysis.
The bottom panel in Figure 4 provides TEC noise estimates based on the 100 s TEC RMS (RMSTEC 100S ). The noise of the L1L5 TEC combination varies from 0.01 to 0.1 TECU, depending on the elevation and SNR. The noise of the AltBOC single-frequency combination (L8C8) exceeds the noise of the L1L5 TEC combination several times over, but at elevations higher than 20 degrees it does not exceed 0.1 TECU. Other combinations demonstrate higher noise, up to 1 TECU (L1C1).
Galileo TEC: Statistical Analysis
One single pass gives only a clue regarding the situation but provides unreliable evidence of AltBOC performance. Thus, we statistically analyzed the whole available one-month dataset, involving all time intervals and all Galileo satellites. The data were split into 3 sets, corresponding to low elevations (0-30°), medium elevations (30-60°), and high elevations (60-90°). For each set, we considered the probability density functions of the TEC RMS for two single-frequency (L8C8 and L5C5) combinations and one dual-frequency (L1L5) combination (see Figure 5).
Both ionospheric irregularities and observables' noises contribute to 100-sec TEC RMS. The gap between the histogram and the zero value show the base noise level. L1L5 data features the smallest gap; calm conditions (when no ionospheric irregularities appear) correspond to zero TEC RMS values. Conversely, L8C8 and L5C5 data show no small 100-sec TEC RMS values due to higher noises.
The higher the elevation, the narrower the distributions and the closer they lie to zero. We expected this, because higher phase and pseudorange noise at low elevations should produce higher TEC noise. Even at 60-90° elevations, noise shifts the L8C8 TEC RMS distribution to higher values. However, the L8C8 TEC RMS distribution is shifted much less than the L5C5 distribution, especially at low elevations. Table 3 summarizes the average TEC RMS for the whole observational dataset. Note that an increase in elevation decreases the TEC RMS by 7-10 times for combinations involving the AltBOC signal. The Galileo E5 AltBOC signal provides the smallest single-frequency TEC noise (L8C8), which is comparable to that of the L1L5 dual-frequency combination. At low elevations, the L1L5 and L8C8 TEC RMS are almost the same; at higher elevations, the L8C8 TEC RMS exceeds the L1L5 one by 1.5 times. Other combinations feature ~1.5-20 times higher TEC noise than Galileo L8C8. The worst results correspond to the single-frequency L1C1 combination.
At low elevations, E5 AltBOC provides at least half the noise of the E5a/E5b combinations (both single- and dual-frequency). The E5a/E5b dual-frequency combination provides almost no advantage over the single-frequency E5 AltBOC combination, probably due to the closeness of their frequencies. Nevertheless, the noise in the single-frequency E5a or E5b combinations exceeds that of the dual-frequency E5a/E5b phase combination by ~1.5 times.
We also compared the obtained Galileo TEC noise with that from GPS. Table 4 provides results on the 100 s RMS for the L1L5, L5C5 and L1L2 GPS TEC. Except at low elevations, the Galileo E5 AltBOC single-frequency combination features less noise than the GPS L1L2 and L1L5 (and, of course, the GPS L5C5) combinations. At high elevations, the 100 s single-frequency TEC root-mean-squares were (from low to high values): ~0.05 TECU for Galileo E5 AltBOC, 0.09 TECU for GPS L5, 0.1 TECU for Galileo E5a/E5b BPSK, and 0.85 TECU for Galileo E1 CBOC. At the same elevations, dual-frequency combinations provided 0.03 TECU for Galileo E1/E5 TEC, and 0.03 TECU and 0.07 TECU for GPS L1L2 and L1L5. Therefore, the Galileo E5 AltBOC signal provided the smallest single-frequency TEC noise, comparable to that of the dual-frequency TEC of both GPS and Galileo.
Galileo Single-Frequency Data for ROTI Calculations
Many scientists use the ROTI index [16] to study small-scale ionospheric irregularities [35]. A higher noise level in the single-frequency TEC data (against dual-frequency data) results in higher ROTI values. However, we expect that such data could also be useful to estimate the effects of ionospheric irregularities. To estimate this, we analyzed ROTI quality from Galileo AltBOC data (L8C8) against ROTI quality from dual-frequency data (we chose L1L5 as a reference). Figure 6 shows how the ROTI values from single-frequency Galileo AltBOC data (L8C8) corresponds to those from L1L5. We compare the data for the same satellite-receiver set, but for different observables. L1L5 provides a reference to find a discrepancy for AltBOC single-frequency data.
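For reference, ROTI is conventionally defined as the standard deviation of the rate of TEC (ROT) over a short sliding window. A minimal sketch (30 s sampling and a 5 min window are assumed here as common practice, not necessarily this paper's exact settings):

```python
import numpy as np

def roti(tec, dt_s=30.0, window_s=300.0):
    """ROTI: standard deviation of ROT (in TECU/min) over a sliding window."""
    rot = np.diff(tec) / (dt_s / 60.0)   # rate of TEC, TECU per minute
    n = max(int(window_s / dt_s), 2)
    out = np.full(rot.size, np.nan)
    for i in range(rot.size - n + 1):
        out[i + n // 2] = np.std(rot[i:i + n])
    return out
```

Because ROT is a first difference, uncorrelated observable noise is amplified by a factor of sqrt(2) and scaled by the sampling rate, which is why noisier single-frequency TEC inflates ROTI, as discussed below.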
We expect a noise-multiplying effect for ROTI. Figure 6a provides the ratio of ROTI_L8C8 to ROTI_L1L5. At the maximum of the distribution, ROTI_L8C8 values exceed ROTI_L1L5 by 3 times. Outliers in the ROTI estimates produce an increase in the distribution at 20 (we chose this value as a limit). ROTI_L1L5 values could exceed ROTI_L8C8 values due to L1/L5 noises.
Scatter diagrams (Figure 6b) also show (for low ROTI) a positive correlation between ROTI_L8C8 and ROTI_L1L5, although the coefficient between them differs from 1. This positive correlation gives hope that scientists could use the single-frequency AltBOC ROTI as an additional indicator for the ionosphere state.
Discussion and Conclusions
Different factors affect TEC measurements: the intrinsic thermal noise of the receiver, the stability of the disciplined oscillator, the coherence of the operating frequencies, multipath, etc. [36]. Increase in the satellite transmitter power [37], application of choke ring antennas, or advanced signal coding (providing a steeper and narrower main maximum of the autocorrelation function) could, to some extent, compensate for such negative factors.
Our results (involving Galileo as an example) show an order-of-magnitude decrease in single-frequency TEC noise when a system uses AltBOC signals instead of BPSK signals. The estimated TEC noise proxies (for elevations higher than 60°), the 100 s root-mean-square (RMS) of TEC, were: ~0.05 TECU for Galileo E5 AltBOC, 0.09 TECU for GPS L5, ~0.1 TECU for Galileo E5a/E5b BPSK, and 0.85 TECU for Galileo E1 CBOC. Dual-frequency combinations provide RMS values of 0.03 TECU for Galileo E1E5 and 0.03/0.07 TECU for GPS L1L2/L1L5. At low elevations, E5 AltBOC provides at least half the single-frequency TEC noise compared with the data obtained from E5a or E5b.
The obtained results indicate that AltBOC signals bring the noise in TEC from a single-frequency phase-code combination down to the noise in TEC from the reference dual-frequency phase combination of BPSK-encoded signals.
Note that our comparison used data obtained simultaneously on the same receiver and antenna, which guarantees the same level of thermal noise. We installed a choke ring antenna, which itself suppresses multipath effects. We could expect that the single-frequency AltBOC TEC has even more of an advantage over single-frequency BPSK TEC when standard antennas are used, especially at low elevations. However, this requires a separate study.
The short dataset, recorded during mostly undisturbed geomagnetic conditions, could limit the obtained estimates. Mid-latitude observations could also limit our study, since no intense small-scale ionospheric irregularities (which usually appear at mid-latitudes during strong or severe storms [38]) affected the GNSS signals. It would be important for future studies to verify our results in the presence of small-scale irregularities at high, mid and low latitudes under intense geomagnetic storms and plasma bubble conditions. We expect that, qualitatively, the results will hold as the solar cycle evolves and geomagnetic activity increases, such that the AltBOC single-frequency TEC will still surpass the BPSK analogue in noise parameters.
The obtained TEC noise estimates contain contributions from the intrinsic noise specific to each individual receiver. Therefore, we cannot consider the obtained estimates universal. However, we suppose similar features for other receivers: a comparable reduction in TEC noise when using AltBOC signals could be expected, but the TEC noise level should agree with a receiver's intrinsic and multipath noise. To verify this, one can use the approach of Demyanov et al. [36].
We also analyzed the ROTI index based on single-frequency AltBOC TEC. The positive correlation observed with dual-frequency data gives hope that scientists could use the single-frequency AltBOC ROTI as an additional indicator for the state of the ionosphere.
Note that, assuming DCBs are known or calibrated, dual-frequency combinations provide absolute values of TEC, while single-frequency combinations seem more suitable for monitoring TEC changes than for absolute TEC values. Nevertheless, the approach of resolving unknown constants through LxCx combinations, which is quite similar to DCB estimation [34], could be adopted, making these data useful for problems that require absolute TEC.
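A common, simplified version of such constant resolution is to "level" the relative phase TEC to the absolute (but noisier) code TEC over a satellite pass, weighting high elevations more. This is a hypothetical illustration of the idea, not the scheme of [34]:

```python
import numpy as np

def level_phase_tec(tec_phase, tec_code, elev_deg):
    """Estimate the unknown constant in phase-derived TEC from the
    code-derived TEC over one arc (elevation-weighted average),
    then shift the phase TEC to absolute values."""
    w = np.sin(np.radians(elev_deg)) ** 2   # downweight low elevations
    offset = np.sum(w * (tec_code - tec_phase)) / np.sum(w)
    return tec_phase + offset
```

Averaging over the whole arc suppresses the code noise, so the leveled series keeps the low noise of the phase TEC while gaining an absolute level.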
We expect similar features (a comparable decrease in single-frequency TEC noise) for BeiDou B2 AltBOC signals. Unfortunately, Galileo and BeiDou use AltBOC coding on only one operating frequency. We could also expect a decrease in dual-frequency TEC noise when both frequencies use AltBOC coding. That would make it possible to record TEC disturbances of low amplitude from such important events as, for example, the effects of artificial high-frequency heating [39], the response of the ionosphere to C-class solar flares [40], and low-magnitude earthquakes [41], which can produce effects at levels typical of current GNSS TEC estimates. Therefore, AltBOC signals could advance geoscience.
Successes and challenges towards improving quality of primary health care services: a scoping review
Background: Quality health services build communities' and patients' trust in health care. They enhance the acceptability of services and increase health service coverage. Quality primary health care is imperative for universal health coverage through expanding health institutions and increasing skilled health professionals to deliver services near to people. Evidence on the quality of health system inputs, interactions between health personnel and clients, and outcomes of health care interventions is necessary. This review summarised indicators, successes, and challenges of the quality of primary health care services.
Methods: We used the preferred reporting items for systematic reviews and meta-analysis extension for scoping reviews to guide the article selection process. A systematic search of literature from PubMed, Web of Science, Excerpta Medica dataBASE (EMBASE), Scopus, and Google Scholar was conducted on August 23, 2022, although the preliminary search began on July 5, 2022. Donabedian's quality of care framework, consisting of structure, process, and outcome, was used to operationalise and synthesise the findings on the quality of primary health care.
Results: Human resources for health, law and policy, infrastructure and facilities, and resources were the common structure indicators. Diagnosis (health assessment and/or laboratory tests) and management (health information, education, and treatment) procedures were the process indicators. Clinical outcomes (cure, mortality, treatment completion), behaviour change, and satisfaction were the common indicators of outcome. Lower cause-specific mortality and a lower rate of hospitalisation in high-income countries were successes, while high mortality due to tuberculosis and the geographical disparity in quality care were challenges in developing countries. Challenges also exist in developed countries (e.g., poor-quality mental health care due to a high admission rate). Shortage of health workers was a challenge in both developed and developing countries.
Conclusions: Quality of care indicators varied according to the health care problems, which resulted in a disparity in the successes and challenges across countries around the world. Initiatives to improve the quality of primary health care services should ensure the availability of adequate health care providers, equipped health care facilities, and appropriate financing mechanisms, and should enhance compliance with health policy and laws as well as community and client participation. Additionally, each country should be proactive in monitoring and evaluating performance indicators in each dimension (structure, process, and outcome) of the quality of primary health care services.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12913-023-09917-3.
Introduction
Quality of care is the extent to which the health care system can achieve the desired health care goals, such as effective recovery, preventing premature mortality, halting disease progression before it becomes complicated, and maximising clients' satisfaction with the care they received [1]. With efficient, integrated, equitable, timely, people-centred, and safe health services, quality preventive, promotive, treatment, palliative, and rehabilitative care can be achieved [2]. These services are provided in primary health care (PHC) [3], for which quality is an attribute of first-contact care for several health conditions [4]. Because PHC is planned to deliver essential health services as close to home as possible, it serves as a roadmap to universal health care coverage (UHC), which must be of high quality to achieve the health system's vision.
Quality is currently on the agenda of the sustainable development goals that target UHC [5]. The World Health Organisation (WHO), the Organisation for Economic Co-operation and Development, and the World Bank have emphasised that ensured quality is a fundamental component of UHC [6]. To streamline policy and PHC quality implementation, a series of national strategic directions have been adopted [7]. Notable quality and safety standards or strategies have been established in some countries, for example, Australia [8,9] and European [10] and African countries [11]. Good health governance and administration [12], quality improvement programmes [13], financial and non-financial support, community empowerment and engagement, competent health care providers, and monitoring and evaluation [14] are some of the quality improvement strategies. These schemes have a vital role in improving the patient experience in PHC, including quality of care, satisfaction, and the health of populations [15].
Despite these strategies, poor-quality care remains a continuing public health concern. This is reflected in safety problems, a large percentage of hospital-acquired infections, a high burden of amenable mortality, and excess health care expenditure. Globally, the estimated annual cost of medication errors is 42 billion United States dollars (US$) [16]. Similarly, more than 10% of hospital expenditure in high-income countries is due to medical errors or hospital-acquired infections [17]; 1 in 10 patients experience medical errors while receiving hospital care, and 7 out of 100 hospitalised patients (1 in 10 in developing countries) acquire a health care-associated infection [17,18]. The situation is even more unacceptable in less developed countries. A systematic analysis of preventable deaths in 137 low- and middle-income countries (LMICs) revealed that 5.0 million deaths are attributable to poor-quality care annually [19], which imposes costs of US$ 1.4 to 1.6 trillion each year in lost productivity [20].
The health system could prevent many deaths if high-quality care were implemented. The Lancet Global Health Commission estimated that high-quality health systems could prevent 8 million deaths yearly in LMICs [5]. This requires systematic and coherent evidence-based actions that emphasise quality [21] and that a pragmatic framework can measure.
Donabedian's quality of care measurement model is considered a logical quality measurement framework to produce evidence on quality care based on the structure, process, and outcome dimensions [22]. This framework indicates what systems, policies, and infrastructure should be in place to ensure the delivery of high-quality PHC services towards the most desired health care outcomes. It helps to identify challenges that need improvement, including commenting on the presence of policy documents or workable guidelines and on the interaction between clients and health care providers. Experts advise that it is crucial to measure quality of care with a focus on the interaction between the structure, process, and outcome dimensions, because outcome status reflects the structure and process indicators [23]. The WHO's 'Network for Improving Quality of Care Programme' has identified four measures for improving the quality of health care: patient outcome measures, patient process measures, facility input or structure-related measures, and programme performance measures [24]. Identifying crucial quality indicators in health care provision is also suggested [25].
Previous reviews focused on either individual countries or specific diseases only. For example, a review on depression [26] and one on outpatient practice of primary care in the United States of America (USA) and the United Kingdom (UK) [27] did not address the successes and challenges of providing quality care in the PHC system. Another review focused on the quality indicators of PHC and also did not address the successes and challenges of quality of care [28]. Therefore, scoping all available evidence, including original articles, reviews, and professional discussions or arguments, will provide information for researchers and highlight areas for policy and decision makers to take corrective action on the identified gaps. This scoping review summarised indicators, successes, and challenges in delivering quality PHC services.
Search strategy
This review is guided by the preferred reporting items for systematic reviews and meta-analysis extension for scoping reviews (PRISMA-ScR) to adhere to procedural activities from the search strategy through to reporting findings [29]. A systematic search of literature from databases was conducted between 05 July 2022 and 23 August 2022 with no date restriction, to access articles from inception to the final search date. The screening process proceeded after all articles were fully exported into the EndNote X9 reference manager software. The databases we accessed to identify articles were PubMed, Web of Science, Excerpta Medica dataBASE (EMBASE), and Scopus. We also searched Google Scholar to find additional literature. We operationalised the concept of quality of care in this study using Donabedian's model [22]. The Donabedian model addresses structure (availability of inputs and resources, appropriateness of facilities and administration), process (indicators streamlined from patient and health worker interaction), and outcome (interventions' health effects). Search terms were "primary health care", "primary healthcare", "primary care", "quality of care", quality, "quality care", "quality of health care", "quality of healthcare", Donabedian, "Donabedian's model", "Donabedian model", "Donabedian's structure process outcome", "Donabedian's structure-process-outcome", "Donabedian structure process outcome" and "structure process outcome". Different Boolean operators were used: "AND" and "OR" to expand or narrow the search parameters, quotation marks ("") to get results with the exact phrases, and parentheses to group search terms. The search strategy fitted in PubMed was ((((("primary health care"[All Fields] OR "primary healthcare"[All Fields] OR "primary care"[All Fields]) AND "quality of care"[All Fields]) OR "quality"[All Fields] OR "quality care"[All Fields] OR "quality of health care"[All Fields] OR "quality of healthcare"[All Fields]) AND "Donabedian"[All Fields]) OR "Donabedian's model"[All Fields] OR "Donabedian's structure process outcome"[All Fields] OR "Donabedian model"[All Fields]) OR "Donabedian structure process outcome"[All Fields] OR "Donabedian's structure-process-outcome"[All Fields] OR "Donabedian structure-process-outcome"[All Fields] OR "structure-process-outcome"[All Fields]. The search strategy for Scopus, Web of Science and EMBASE is available in supplementary file 1.
Selection criteria and data extraction
Searches were limited to articles published in English. We used 'population', 'concept' and 'context' frameworks to establish a search strategy and include articles [30]. The population was any participants, PHC personnel (general practitioners, nurses, pharmacists, midwives, dentists, etc.), or clients who participated in the study. The 'concept' was the quality of PHC, approached through Donabedian's structure-process-outcome model. The 'context' was any study setting, including urban or rural institutions (district hospitals, health centres), community care, nursing homes, family care, or articles that mentioned PHC settings in any country. When articles did not mention PHC, we reviewed the keywords and included the article if it fulfilled the other criteria. The search was tailored to any document type, such as an article, review, perspective, opinion, letter, commentary, etc. However, we only found opinions, professional discussions, reviews, and articles. Previous reviews have reported syntheses of different original studies, which were not necessarily conducted within the Donabedian input-process-output framework; such reviews were included in the current review if they summarised their findings in this framework's context. The reference lists of previous reviews were assessed to check whether the original studies included in each review were conducted based on the Donabedian framework. Primary studies included in the review articles were in different contexts, dimensions, types of cases, functions, and domains, except one review from 2005 [31], which is included in another from 2010 [32]. Therefore, we could not directly include the primary studies that were included in the former reviews, except for these two reviews from 2005 and 2010 [31,32]. We decided to include both reviews because only part of the information from the 2005 review [31] was included in the 2010 review [32]. Additionally, one of the purposes of a scoping review is to include any type of article, including previous reviews, to map the available literature besides summarising results [33]. Therefore, the steps before data extraction were: article search, exporting all accessed articles into the EndNote X9 reference manager, duplication check, title screening, abstract screening, and full-text assessment. Author, publication year, country discussed, type of study or study design, PHC setting, study participants, and main findings of included documents were extracted.
Data synthesis
The main findings for the structure, process, and outcome dimensions were synthesised using a narrative approach. Success was defined as high-quality care or improved quality of care. Any observed gap in the quality of PHC, or barriers that affected the provision of quality PHC, was narrated as a challenge. The search results and their characteristics, PHC quality indicators, and the successes and challenges of quality in PHC are described sequentially in the results section. Summaries of professional discussions that described neither successes nor challenges are covered in the PHC quality indicators section of the results.
Search results
A total of 1,055 documents were retrieved. These articles were accessed using the final search strategy in Web of Science (84 articles), Scopus (66 articles), and PubMed (722 articles), as well as a search of articles by topic in Google Scholar (105). The 1,055 articles were exported into EndNote X9 and checked for duplication. After duplicates were removed (272 excluded), 783 were eligible for title screening, of which 528 were excluded. Then, 255 were eligible for abstract screening, and 196 were excluded because the abstract had no information related to the objectives. Next, 59 articles were eligible for full-text screening, and 37 were excluded. Finally, 22 were eligible for the current synthesis (Fig. 1).
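The screening flow above can be checked arithmetically; the following is a simple consistency sketch of the counts reported in the text:

```python
# Each screening stage removes the stated number of records.
records = 1055
after_dedup = records - 272         # eligible for title screening
after_title = after_dedup - 528     # eligible for abstract screening
after_abstract = after_title - 196  # eligible for full-text assessment
included = after_abstract - 37      # included in the final synthesis
print(after_dedup, after_title, after_abstract, included)  # 783 255 59 22
```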
PHC quality indicators
Several indicators were identified in the structure, process, and outcome dimensions of PHC quality.
Byrne and Tickle argue in their opinion article that six domains of health care quality (safety, effectiveness, timeliness, patient-centredness, efficiency, and equitability) have to be measured for structure, process, and outcome to assess the quality of primary dental care [25]. Gardner and Mazza, who explored the implementation of the quality framework in general practice settings in New Zealand, the UK, Germany, and Australia, concluded that the application of the Donabedian framework varies across countries [23]. An umbrella review identified 727 PHC quality indicators: 74.5% were process indicators, 19.2% were outcome indicators, and the remainder (6.3%) were structure indicators; these indicators were related to safety, effectiveness, timeliness, patient-centredness, efficiency, and equitability [28].
Other reviews identified quality indicators: 134 on geriatric pharmacotherapy [35], 53 on depression [26], 21 on early abortion care [47], and 20 on osteoarthritis [48]. The types and numbers of indicators depend on the nature of the disease. For example, in geriatric pharmacotherapy, 80% and 38% of indicators were related to treatment safety and to reasons for drug selection, respectively [35], and the majority (82%) of quality indicators in this therapy were process indicators [35]. There was no structure indicator for the quality measurement of geriatric pharmacotherapy delivered by community pharmacists [35]. Of the 53 quality indicators identified in depression care, 16 were structure, 33 process, and 4 outcome indicators; a "do not do" process indicator for some selected antidepressant drugs was also identified [26]. As an additional example, the 20 quality indicators in osteoarthritis care were grouped into two structure, sixteen process, and two outcome indicators [48]. From home health care professionals' perspective, indicators for home pharmaceutical care were established with 9 themes and 27 subthemes [36]. One study discussed the Donabedian care model as a mediation pathway; structure indicators can directly affect outcome indicators [38].
In a few studies, some process determinants were grouped into structure indicators. To illustrate, waiting time [48], teamwork [34,36], and professionalism [34,36] were reported in the structure domain, but they also belong to the process domain.
The common structure indicators were human resources for health, law and policy, infrastructure, facilities, and resources. Diagnosis (health assessment and/or laboratory tests) and management (health information, education, and treatment) were some of the process indicators. Clinical outcomes (cure, mortality, defaulter, treatment completion, recovery from pain) and satisfaction were the common measurement indicators of the outcome dimension. The main indicators based on the Donabedian quality care model are summarised in Fig. 2.
The details of each indicator with a citation are also shown in the supplementary file (supplementary file 2).
Successes and challenges of quality of PHC
In addition to identifying several indicators as determinants for measuring the quality of PHC, the absence or presence of structure indicators, the appropriateness of process indicators, and the status of health service outcomes indicate whether PHC is on a successful road map or struggling with challenges in the delivery of quality services. A similar level of perception between managers and clients on health care providers' competency and professional conduct, and a similar perception of clients and health care providers on structural factors (e.g., Nigeria) [45]; high-quality structure indicators in some countries (e.g., Iran) [43]; and lower cause-specific mortality and a lower rate of hospitalisation due to chronic disease and pneumonia in high-income countries (e.g., the USA) [37] were achievements. Challenges to quality PHC include high mortality due to tuberculosis in low-income countries (e.g., Uganda) [46], geographical disparity of quality care (e.g., Ethiopia and Iran) [40,43], a shortage of health care providers in both developed and developing countries, client and community engagement problems, lack of guidelines and providers' poor adherence to guidelines [40], provision of inadequate information to clients [46], and poor quality due to a high admission rate (e.g., for mental disorders in the rural USA) [32] (Table 1). Table 1 shows the successes and challenges of quality of care in PHC based on the World Bank country categories.
Discussion
This review summarised indicators, successes, and challenges of quality of care in PHC settings. Quality of PHC consists of an interaction of several quality indicators related to structure, process, and outcome, denoting the physical and organisational characteristics where health care occurs, the care delivered to clients, and the effect of health care on the status of patients and the population. The structure domain comprises health care resources, human resources, infrastructure, governance, law, policy, and guidelines. Providing preventive, professional, and ancillary services accompanied by professionalism was the common process indicator. Outcome indicators include mortality, cure rate, treatment completion, behavioural change, and client satisfaction.
Quality of care indicators were identified. Some studies recruited quality indicators based on experts' and health care providers' perspectives [34][35][36] without community engagement. This may create feasibility, applicability, acceptability, and implementation challenges, as well as a lack of comprehensiveness. For example, there was no structure indicator for geriatric pharmacotherapy [35]. This could be solved when perspectives from clients, families, health care providers, and administrators are considered. It is known that community engagement, continuous feedback, government support, and active community involvement play pivotal roles in the quality issues of PHC [49,50], while lesser client engagement decreased the quality of health care services [40]. Additionally, only one review assessed all quality elements (efficiency, effectiveness, safety, people-centredness, timeliness, equity, and integration) using structure, process, and outcome components [28], despite the importance of assessing the six domains of health care quality [25]. The Institute of Medicine has developed six domains of health care quality: safe, effective, patient-centred, timely, efficient, and equitable care [51]. The current review relies on previous studies, which did not present all domains of quality. Therefore, assessing the full domain of quality of PHC services under structure-process-outcome will give critical evidence.

(Fig. 2 PHC quality indicators and their interactions, based on the Donabedian model.)
The relationship between structure, process, and outcome indicators was a mediation process [38]. There was a direct and indirect relationship between structure, process, and outcome that operated when the outcome indicators were client satisfaction, coherence of integrated care, competence of nurses, and patients' confidence in nurses. Clients were satisfied when they could attend health institutions at convenient times, waited a short time to receive care, and attended clean and suitable health institutions (e.g., waiting areas and other infrastructure). This means that clients were satisfied before interacting with health care providers, which indicates the need for critical attention when rating the status of quality of care in the absence of the process through which the real services are provided to clients. Studies have investigated structure factors as direct determinants of client satisfaction [52,53]. Similarly, outcomes such as coherence of care and patient confidence in health care providers were affected by interpersonal aspects, shared decision-making procedures, and clients' own problems and feelings [54].
Challenges persist in improving the quality of PHC services. Disparity of quality care between different health centres [40,43] and a lack of structural inputs were reasons for poor-quality care in low-income countries. There was also low and varied quality of care between regions in middle-income countries due to the absence of support mechanisms, lack of coordination, problems in the comprehensiveness and continuity of care [55][56][57], a lack of privacy and respect, an unsatisfactory pace of quality system development, and staff shortages [39,58,59]. Most countries have national quality care initiative strategies towards UHC [6], but they are not equally proactive in implementing the strategies. They also have different quality implementation approaches. For example, Donabedian's system-based framework implementation is top-down in New Zealand and the UK and bottom-up in Germany [23], though further research is needed to determine whether the top-down or bottom-up approach results in better quality of care. Countries may also have varied levels and extents of adapting PHC to different models of care, which the included articles did not address. Some are a 'client circle of support' [60], a 'person-centred' approach [61,62], a 'conversation approach' [63], and 'making or using action plans' for PHC services [64].

Table 1 The successes and challenges of quality service delivery in PHC

Successes

Low-income countries (maternal and tuberculosis care)
• Folic acid supplementation, the presence of weight measurement, accessibility, and proper consultation time increased women's satisfaction in Ethiopia [41]
• Acceptable level (0.6%) of tuberculosis treatment failure in Uganda [46]

Lower-middle-income countries (general care and maternal service)
• Clients' and health care providers' similar perception of structural determinants in Nigeria [45]
• Satisfactory quality level of the structure dimension in the majority (86.4%) of health centres; 95.4% of women were very satisfied with the services in Iran [43]

High-income countries (general care and pharmacy service)
• Lower population-level risk differences, lower cause-specific mortality, and lower rates of hospitalisation in the USA [37]
• A positive impact of health support pharmacy services on outcome indicators, including clinical outcomes, humanistic outcomes, health behaviour change, serving as a community hub, and impact on other professionals (a sense of reassurance and operational efficiency) in Japan [34]

Challenges and/or unsuccessful progress

Low-income countries (general care, maternal, adolescent and tuberculosis services)
• Disparity of quality of care between health centres in Ethiopia [40]
• Medium level of quality (measured by satisfaction) for structure (58.8%), process (46.4%), and outcome (47.2%) indicators in adolescent and youth-friendly services in Ethiopia [40]
• Unavailability of adequate and trained health care providers, poor care engagement of adolescents and youths, a lack of guidelines, protocols, and procedures, and providers' poor adherence to guidelines in Ethiopia [40]
• Only 55% of women were satisfied with ANC services in Ethiopia [41]
• Inadequate information provision and health workers' poor attitude towards fellow health care providers in Uganda [46]
• Lower percentage of treatment completion (40.3%), lower cure rate (39.2%), high mortality (6.8%), and a high percentage of defaulted treatment (12.5%) in tuberculosis case management in Uganda [46]

Lower-middle-income countries (general care and chronic disease services)
• Different satisfaction levels of patients and managers regarding accessibility of care (96.3 vs. 85.7), supply of critical drugs (92.9 vs. 100), availability of equipment (97 vs. 57.2), friendliness (92.4 vs. 71.4), and attending to patients (74 vs. 57.2) in Nigeria [38]
• Managers and patients complain about the poor quality of care due to long waiting times in Nigeria [39]
• Insufficient manpower (40.3%), lack of basic amenities (light, water supply, and good roads) (40.3%), insufficient equipment (18.1%), insecurity and communal crises (15.3%), and poor attitudes of health care providers and clients in Nigeria [45]
• Low mean scores for structure (34.5), process (38.5), and outcome (65.6) in Iran [42]
• Lack of structure indicators and inappropriateness of process indicators in Iran [42]

Upper-middle-income country (general care and chronic disease services)
• Patients' and managers' different satisfaction levels regarding health care providers' coherence (97.4 vs. 85.7) in South Africa [38]
• Irregular pre-packing of drugs in South Africa [39]

High-income countries (mental and chronic disease services, and general care)
• Inappropriate use of restraints, catheters, and psychoactive drugs in Canada [31]
• High percentage of rural clinics lacking physicians and resources for preventive care of congestive heart failure, chronic obstructive pulmonary disease, diabetes, and bacterial pneumonia in the USA [37]
• Poor quality of rural mental health care in the USA [32]
• Unrecognised impact of electronic health records on clinical outcomes across developed countries [27]
An inadequate health workforce was an understood challenge behind poor-quality care in low-income countries (e.g., Uganda) [46,65]. For instance, the quality of ANC and adolescent and youth-friendly services was low due to a shortage of adequately trained health care providers. On the other hand, staff shortages were handled in high-income countries in ways that did not interrupt the quality of care, even though workforce shortage remained a challenge in developed countries. For example, the absence of physicians did not lower the quality of care in the USA [37]. The availability of other structure indicators and the substitution of deficient personnel by other health care professionals could maintain high-quality care. For instance, nurse-led PHC provided care equivalent to physician-led care in chronic disease management [66], improved clinical outcomes and quality of life, and enhanced patient satisfaction [67,68]. The health workforce shortage between developed and developing countries might vary based on the width and depth of health care. For example, the chiropractic workforce is unknown in some developing countries, and its shortage is sometimes underreported due to poorly organised and unavailable written job descriptions. In most developed countries, it is in practice, people demand the services, and the shortage can be reported [69]. Therefore, the health workforce shortage should be interpreted in light of the context.
The rate of admission was identified as a challenge for quality PHC service delivery in rural areas. For example, mental health care in rural settings was poor due to a lower chance of accessing appropriate care and an increasing admission rate in the USA [32]. This might be because clients wait longer until they are seen by a health professional, and they might suffer the pain of disease progression if timely intervention is not provided.
Another challenge was a debate on electronic health records, as one review reported that electronic health records have no impact on clinical outcomes [27]. However, another argument concluded that 'electronic medical records improved quality of care, patient outcome and safety by improving management, preventing medical errors, reducing unnecessary investigations, and improving therapeutic interaction among primary care providers and patients' [70]. Other studies also confirmed the importance of electronic medical records for improving quality of care [71,72], though a future prospective study has been suggested [73].
This review has some limitations. The articles included in this review were selected based on Donabedian's quality framework; several other articles may have reported on quality of care using other approaches. For example, the current review did not address factors such as non-compassionate and disrespectful care, which can contribute to low-quality care: only 60% and 64% of health care providers provided compassionate and respectful care, respectively, in Ethiopia, despite caring, respectful, and compassionate health care workers and quality being included in the health care agenda [74,75]. Similarly, in Uganda, a case study revealed that the national health system, the overall working environment, the national budgetary allocation to the health sector, and limited collaboration between health centres and hospitals are factors affecting the quality of health care [76]. Additionally, the articles included in this review were published only in English. There are articles published in non-English languages; including those articles might allow us to see the quality of PHC care in other countries' contexts. Furthermore, the search was conducted only in four databases (Web of Science, Scopus, EMBASE, and PubMed) and Google Scholar. Other databases (e.g., the Cochrane Library) may contain related articles.
Conclusions
Quality of care indicators varied according to the health care problems, which resulted in a disparity in the successes and challenges between developing and developed countries. Disparity in service coverage due to daily living conditions and mortality due to infectious diseases were more common in developing countries. On the other hand, quality of care problems due to chronic diseases were recorded in developed countries. An inadequate health workforce was a challenge in both developing and developed countries as a structure component of quality care provision. The PHC system should ensure the presence of adequate health care providers, equipped health care facilities, compliance with health policy and laws, adequate financing, and enhanced community and client participation. Additionally, each country should implement national quality initiative strategies with appropriate monitoring and evaluation of performance in each structure, process, and outcome indicator. PHC quality improvement needs appropriate resources and infrastructure, and an adequate PHC workforce with a skill mix.
Routing valley exciton emission of a WS2 monolayer via delocalized Bloch modes of in-plane inversion-symmetry-broken photonic crystal slabs
The valleys of two-dimensional transition metal dichalcogenides (TMDCs) offer a new degree of freedom for information processing. To take advantage of this valley degree of freedom, on the one hand, it is feasible to control valleys by utilizing different external stimuli, such as optical and electric fields. On the other hand, nanostructures are also used to separate the valleys by near-field coupling. However, for both of the above methods, either the required low-temperature environment or the low degree of coherence limits their further applications. Here, we demonstrate that all-dielectric photonic crystal (PhC) slabs without in-plane inversion symmetry (C2 symmetry) can separate and route valley exciton emission of a WS2 monolayer at room temperature. Coupling with circularly polarized photonic Bloch modes of such PhC slabs, valley photons emitted by a WS2 monolayer are routed directionally and are efficiently separated in the far field. In addition, far-field emissions are directionally enhanced and have long-distance spatial coherence properties.
To develop valleytronic devices based on TMDCs, effective approaches to separate valleys in the near or far field are indispensable. One feasible way is to selectively excite valleys by utilizing different external stimuli such as optical and electric fields [14][15][16][17][18] , while the usually required low-temperature environment makes it difficult for practical applications. Due to the powerful ability of manipulating light, nanostructures [19][20][21] are also proposed to separate valleys [22][23][24][25][26][27][28][29][30][31][32] . For example, based on either the transverse spin momentum of surface plasmons 27,28 or the variable geometric phase of metasurfaces 31 , valley separation was reported to be achieved in the near or far field at room temperature. However, both the intrinsic loss of metal materials and the localized spatial distribution of resonant modes of nanoantennas limit efficient valley separation, leading to a low degree of valley polarization [24][25][26][27][28][29][30] . As a counterpart of metasurfaces, photonic crystals (PhCs) eliminate all these disadvantages due to delocalized photonic Bloch modes and low-intrinsic-loss dielectric constituents. In addition, these Bloch modes are found to have peculiar polarization properties. With these attractive properties, PhCs have been widely applied in various studies, such as bound states in the continuum [33][34][35][36][37] , topological valley photonics 38,39 , PhC lasers [40][41][42] , and spontaneous emission control of TMDCs 43 . However, to date, there are no reports of effective valley separation in TMDCs by using PhCs.
In this article, we demonstrate that two-dimensional all-dielectric PhC slabs without in-plane inversion symmetry can be used to efficiently separate valley exciton emission of a WS 2 monolayer in the far field at room temperature. The valley exciton emission is routed with high directionality and a high degree of valley polarization, as shown in Fig. 1d. For this type of PhC slab, paired delocalized Bloch modes with different circular polarizations not only play a critical role in separating and enhancing directional valley exciton emission but also lead to spatial coherence properties of the emission field, which have not been discussed in past studies. Experimentally, the angle-resolved photoluminescence (PL) results directly show efficient valley separation in the far field, with a degree of valley polarization up to 88%. Time-resolved PL measurements indicate a 75% enhancement of the exciton radiative rate. In addition, the double-slit interference results reveal that the spatial coherence length of the emission field for a WS 2 monolayer on a PhC slab without in-plane inversion symmetry is longer than 6 microns (29 microns in theory).
Principle of separating and routing valley exciton emission
Analogous to electronic band structures in solids, Bloch scattering by the periodic artificial atoms of PhC slabs alters the dispersion relation of light in the slab, resulting in photonic bands 44 . Each optical state in the photonic bands corresponds to a delocalized Bloch mode with well-defined energy and momentum. Modes above the light cone are radiative due to coupling to free space 44 . For these radiative modes, the polarization states in the far field are strictly defined. The corresponding polarization states of radiative Bloch modes in an arbitrary photonic band can be further projected onto the structure plane and mapped onto the Brillouin zone, defining a polarization field in momentum space 33,34 . These polarization properties could in principle be used to control the radiation of luminescent materials. However, owing to high rotation symmetry, the polarization field is nearly linear in most PhC slabs 45 . As a consequence, the polarization states of those PhC slabs can only cover a belt near the equator of the Poincaré sphere 46 (a space describing all polarization states, shown in Fig. 1b). Because a large area, including the two poles, is not covered, these Bloch modes cannot be used to separate valley exciton emission of TMDCs. In contrast, as is known, broken inversion symmetry is of vital importance in the appearance of inequivalent valley excitons in TMDCs. Similarly, we recently reported that paired circularly polarized states with different chiralities emerge from vortex singularities after breaking the in-plane inversion symmetry of PhC slabs 46 , as shown in Fig. 1c. Then, in addition to the areas near the equator, the polarization states cover the whole sphere, including the two poles of the Poincaré sphere, corresponding to polarization states with a high degree of circular polarization in momentum space.
Therefore, this type of PhC slab with circularly polarized radiative states could be an ideal platform for us to separate valley exciton emission of TMDCs in the far field, as illustrated in Fig. 1d.
First, valley photons could couple to circularly polarized states with corresponding chirality and become separated in the momentum space. Second, these Bloch modes are delocalized and could be used in coherent emission 47,48 . The spatial coherence properties of the emission field lay the foundation for the directionality and highly efficient separation of valley exciton emission of a WS 2 monolayer. A detailed discussion is provided in Supplementary Material section 1.
Results and discussion
To demonstrate the existence of opposite circularly polarized states in momentum space, we designed an in-plane inversion-symmetry-broken PhC slab and studied the transmittance spectra in theory and experiment, as shown in Fig. 2. The slabs here are made of silicon nitride (Si 3 N 4 , refractive index ∼2) and silicon dioxide (SiO 2 , refractive index ∼1.5). The thickness of the Si 3 N 4 layer is 150 nm. The thickness of the SiO 2 layer is 500 microns, which could be considered infinite compared to the wavelength of visible light. Square lattices of holes with a period a = 390 nm are etched in the Si 3 N 4 layer. To break the in-plane inversion symmetry, the shape of the etched hole in a unit cell is set as an isosceles triangle, with the height h and baseline length b of the triangle being equal (h = b = 250 nm), as shown in Fig. 2a. More details about the sample design can be found in Supplementary Material section 3.
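As a quick geometric sanity check (not a figure reported in the paper), the air fill fraction implied by these parameters follows from the triangle area divided by the unit-cell area:

```python
# Sanity check (illustrative, not from the paper): air fill fraction of the
# unit cell for an isosceles-triangle hole with h = b = 250 nm
# in a square lattice with period a = 390 nm.
a = 390.0          # lattice period, nm
h = b = 250.0      # triangle height and baseline, nm

fill = (0.5 * b * h) / (a * a)  # triangle area / unit-cell area
print(round(fill, 3))  # 0.205, i.e., ~20.5% of the cell is etched away
```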
We first simulated the angle-resolved transmittance spectra under σ + -polarized incidence by Rigorous Coupled Wave Analysis (RCWA), with the incidence plane along the Γ-X direction. The spectra are asymmetric, and there are some diminished regions on the photonic bands, indicated by blue arrows in Fig. 2b. These diminished regions correspond to the non-excited states under σ + -polarized incidence. Hence, those states in the diminished regions are σ − polarized. Changing the incident light to σ − polarization, the diminished regions switch to the other side (Fig. S1b). To show this effect experimentally, we fabricated samples using electron-beam lithography and reactive ion etching (for more details, see Methods). By using a homemade polarization-resolved momentum-space imaging spectroscopy system (Fig. S4), angle-resolved transmittance spectra were measured (Fig. 2c), in accordance with the simulation. Both the simulated and experimentally measured results confirmed the appearance of optical modes with a high degree of circular polarization in our designed PhC slab. For comparison, we also studied the angle-resolved transmittance spectra of the PhC slab with in-plane inversion symmetry. As shown in Fig. 2d, the designed shape of the etched hole in the unit cell is a circle (diameter d = 210 nm). As expected, we did not observe asymmetric spectra under σ + -polarized incidence in either the simulation or the experiment, as shown in Fig. 2e, f. When changing the incidence to σ − polarization, the transmittance spectra are the same as those in the case of σ + polarization (Fig. S1c). These results demonstrate that by breaking the in-plane inversion symmetry of PhC slabs, circularly polarized states emerge in the photonic bands.
A large-area WS 2 monolayer is grown on a Si/SiO 2 substrate by the CVD process and then transferred onto the PhC slabs. Both the PhC slabs and part of the unstructured flat Si 3 N 4 substrate are covered (Fig. S10). To study the PL distribution in the far field, angle-resolved PL spectra are measured (Supplementary Material section 5), as shown in Fig. 3a-f. The detection plane is along the Γ-X direction, in accordance with the transmittance spectra measurement in Fig. 2. We selected σ + (σ − ) PL by placing a quarter-wave plate and a linear polarizer in the detection path (Fig. S4). Figure 3e, f shows the asymmetric σ + (σ − ) PL spectra of the WS 2 monolayer on the PhC slab without in-plane inversion symmetry. The σ + (σ − ) PL enhanced regions correspond to regions with a high degree of σ + (σ − ) polarization in the photonic bands. Figure 3a, b shows σ + (σ − ) PL spectra of the WS 2 monolayer on a flat substrate. Figure 3c, d shows σ + (σ − ) PL spectra of the WS 2 monolayer on the PhC slab with in-plane inversion symmetry. Different from those in Fig. 3e, f, all spectra in Fig. 3a-d are symmetric for both σ + and σ − detection. From the abovementioned experimental results, we can conclude that, as shown by the asymmetric spectra, valley photons emitted by the WS 2 monolayer have been separated in the far field by PhC slabs without in-plane inversion symmetry. In addition, we performed time-resolved PL measurements at room temperature (Supplementary Material section 10). Compared with that for the WS 2 monolayer on a flat substrate, the exciton radiative rate, namely, the reciprocal of the radiative lifetime, is enhanced by 75% when the WS 2 monolayer is on a PhC slab without in-plane inversion symmetry.
To further study the degree of separation in Fig. 3e, f, we plotted the angle-resolved σ + (σ − ) PL spectra for a single wavelength, as shown in Fig. 3g, h. The dotted line refers to 615 nm, and the solid line refers to 628 nm, which are also marked in Fig. 3e, f. We observed that the σ + (red) and σ − (blue) PL maximums appear separately at different angles. The σ + and σ − PL peaks are separated by nearly 6 degrees at 615 nm and 3 degrees at 628 nm. For comparison, PL spectra on a PhC with in-plane inversion symmetry at the corresponding wavelengths are shown in Fig. S5, with the σ + and σ − PL maximums overlapping at the same angle. We also show that the photoluminescence of the WS 2 monolayer on this PhC slab without in-plane inversion symmetry is highly directional. As shown in Fig. 3g, h, the full width at half maximum of the PL peaks (Δθ) is less than 3 degrees at 615 nm and 2 degrees at 628 nm. This result is due to the delocalized property of Bloch modes, leading to the long-distance spatial coherence property of the far-field emission by the WS 2 monolayer on the PhC slabs. According to the Fourier relation between momentum and position, a wide distribution in real space means that the mode is localized inside a small area in momentum space. This effect corresponds to the small angular distribution of the far-field emission, i.e., the directional emission, and will be further discussed later in this article. For this reason, although the separation of the σ + and σ − PL peaks is small, the valley exciton emission could still be efficiently separated in the far field. Further, we quantify the degree of valley polarization by P(θ) = (I + (θ) − I − (θ)) / (I + (θ) + I − (θ)), where I + (I − ) refers to the PL intensity with σ + (σ − ) polarization for a single wavelength, and θ is the radiation angle. The degree of valley polarization is plotted in Fig. S7, with the maximum degree of valley polarization calculated up to 84%.
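The degree-of-valley-polarization metric P = (I+ − I−)/(I+ + I−) is straightforward to evaluate from a pair of polarization-resolved intensities. A minimal sketch with illustrative (non-measured) values:

```python
def valley_polarization(i_plus, i_minus):
    """Degree of valley polarization P = (I+ - I-) / (I+ + I-)."""
    return (i_plus - i_minus) / (i_plus + i_minus)

# Illustrative intensities (not measured data): at a sigma+ PL peak the
# co-detected sigma- signal is weak, giving P close to the ~84% reported.
print(round(valley_polarization(4700.0, 400.0), 2))  # 0.84
```

In practice, P would be evaluated pointwise over the emission angle (or over momentum) from the two measured PL traces.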
These results indicate that the PL of the WS 2 monolayer on the PhC slab without in-plane inversion symmetry is highly directional and has a high degree of valley polarization. Based on the measured angle-resolved σ + (σ − ) PL spectra of the WS 2 monolayer on the PhC slab without in-plane inversion symmetry, we mapped the PL intensity distribution of a single wavelength in momentum space, as shown in Fig. 4a-d. The upper (lower) row corresponds to 615 (628) nm. The PL spectra along different directions in momentum space were measured by rotating the sample in-plane relative to the entrance slit of the imaging spectrometer. The projected momentum k is calculated by k = k 0 sinθ (k 0 = 2π/λ is the wavevector of light in free space, and θ is the emission angle). Then, we used P(k) to quantify the degree of valley polarization in momentum space, similarly defined by P(k) = (I + (k) − I − (k)) / (I + (k) + I − (k)), as shown in Fig. 4e, f. Here, I + (I − ) refers to the PL intensity with σ + (σ − ) polarization for a single wavelength. Experimentally, the maximum calculated P reaches 88%, as shown in Fig. 4f. Note that the maximum P did not appear along the Γ-X direction in momentum space. This result is as expected because the circularly polarized states of the designed PhC slab without in-plane inversion symmetry are slightly shifted from the Γ-X direction in momentum space 40 . The sign of P(k) reverses at opposite sides of momentum space, demonstrating the separation of valley exciton emission with different chiralities.

(Fig. 4 Experimental measurement of PL spectra and valley polarization in momentum space. a-d σ + and σ − PL intensity distributions in momentum space at 615 nm (upper) and 628 nm (lower), for the WS 2 monolayer on the PhC slab without in-plane inversion symmetry. e, f Images of valley polarization P(k) in momentum space; the maximum calculated P reaches 88%.)
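The angle-to-momentum projection used for these maps, k = k0·sin θ with k0 = 2π/λ, is a one-liner. The sketch below uses illustrative values (the 615 nm line and a 6-degree emission angle), not the paper's measured data:

```python
import math

def projected_momentum(theta_deg, wavelength_nm):
    """In-plane projected momentum k = k0 * sin(theta),
    with k0 = 2*pi/lambda. Returns k in rad/nm."""
    k0 = 2.0 * math.pi / wavelength_nm
    return k0 * math.sin(math.radians(theta_deg))

# Example: emission at 6 degrees for the 615 nm line.
k = projected_momentum(6.0, 615.0)
print(round(k * 1000, 3))  # ~1.068 rad/micron
```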
In contrast, we also measured and calculated P(k) of the emission by a WS 2 monolayer placed on a flat substrate, and P(k) was negligible (Fig. S12).
In addition to valley-related directional emission in momentum space, we also expected a spatial coherence property of the emission by the WS 2 monolayer on the PhC slab without in-plane inversion symmetry. Young's double-slit experiments were performed, as shown in Fig. 5. The experimental setup is illustrated in Fig. 5a, and the working principle is based on Fourier transformation. The double slit is mounted on the real image plane inside the optical measurement setup to select radiation fields from two different positions on the sample. The radiation fields from these two positions intersect with each other on Fourier image 2 at the entrance of the spectrometer. Therefore, the spatial coherence properties on the surface of the sample could be directly detected in the far field. By changing the etched depth of the PhC slab, we were able to overlap the measured photonic band with the PL spectra of the WS 2 monolayer to obtain enough signal intensity. Interference fringes are observed in the angle-resolved PL spectra along the Γ-X direction, as shown in Fig. 5b. The red-marked line is further plotted in Fig. 5c, showing the interference intensity distribution at 621 nm.
The fringe visibility V is calculated to be ~50%, defined by V = (I max − I min) / (I max + I min), where I max and I min are the intensities of adjacent maximums and minimums 49 . In this measurement, the real double-slit distance d is 120 microns. The scanning electron microscopy image of the double slit is presented in Fig. S13. The magnification of the real image is 20, so the effective double-slit distance on the sample is 6 microns. The 6-micron effective double-slit distance is almost ten times the emission wavelength, demonstrating that the measured spatial coherence length is larger than 6 microns. Moreover, the spatial coherence length can be calculated in theory by λ/Δθ, which is widely used in optical coherence theory 50 . Here, Δθ is ~0.0215 rad (1.23 degrees) at 621 nm (Fig. S14), and the calculated spatial coherence length is ~29 microns. In comparison, no interference fringes are observed when the WS 2 monolayer is placed on a flat substrate, as shown in Fig. 5b, c. This result means that the far-field emission of the WS 2 monolayer on a flat substrate has no long-distance spatial coherence property. Hence, we reveal that the far-field emission by the WS 2 monolayer on the PhC slab without in-plane inversion symmetry has a long-distance spatial coherence property. This property of the PhC slab extends the coherence control of the PL of the WS 2 monolayer from temporal coherence to spatial coherence.
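Both quantities quoted here, the fringe visibility V = (Imax − Imin)/(Imax + Imin) and the far-field coherence-length estimate λ/Δθ, can be checked with a few lines. The intensities below are illustrative placeholders; only the 621 nm wavelength and Δθ ≈ 0.0215 rad come from the text:

```python
def fringe_visibility(i_max, i_min):
    """Fringe visibility V = (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

def coherence_length_nm(wavelength_nm, delta_theta_rad):
    """Far-field estimate of the spatial coherence length, L ~ lambda / dtheta."""
    return wavelength_nm / delta_theta_rad

# Illustrative fringe intensities giving the ~50% visibility quoted in the text.
print(round(fringe_visibility(3.0, 1.0), 2))                # 0.5
# Values from the text: 621 nm line with dtheta ~ 0.0215 rad.
print(round(coherence_length_nm(621.0, 0.0215) / 1e3, 1))   # 28.9 (microns)
```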
In summary, we proposed in-plane inversion-symmetry-broken all-dielectric photonic crystal slabs to route valley exciton emission of a WS 2 monolayer in the far field at room temperature. By breaking the in-plane inversion symmetry of the PhC slab, we observed that paired circularly polarized states with different chiralities emerge from vortex singularities. Via coupling with those delocalized circularly polarized Bloch modes, valley photons emitted by the WS 2 monolayer were separated in momentum space, and the exciton radiative rate was significantly enhanced. In addition, both the directional emission and the long-distance spatial coherence property benefit the application of in-plane inversion-symmetry-broken PhC slabs to route valley exciton emission. Moreover, our method could be extended to manipulate valley exciton emission of other TMDC monolayers. The ability of these PhC slabs to transport valley information from the near field to the far field would help to develop photonic devices based on valleytronics.

(Fig. 5 b The experimental results for the case of a 6-micron effective double-slit distance. The real double-slit distance d is 120 microns; the magnification of the real image is 20, so the effective double-slit distance on the sample is 6 microns. The upper panel shows the WS 2 monolayer on the PhC slab without in-plane inversion symmetry; the lower panel is for the WS 2 monolayer on a flat substrate, with the signal intensity shown at two-fold magnification. The detection plane is along the Γ-X direction. c The interference intensity distribution in b at 621 nm. The far-field emission of the WS 2 monolayer on the PhC slab without in-plane inversion symmetry has a long-distance spatial coherence property.)
Sample fabrication
The fabrication of a photonic crystal slab

The sample structure was two slab layers, with a thin silicon nitride layer on the silicon dioxide substrate. The silicon dioxide substrate was cut from a 500-micron-thick quartz wafer. Then, a silicon nitride layer was grown on the silicon dioxide substrate by plasma-enhanced chemical vapour deposition (PECVD). The thickness of the grown silicon nitride layer was nearly 150 nm, and the thickness could be tuned by controlling the deposition time. To fabricate the designed structure, the raw sample was spin-coated with a layer of positive electron-beam resist (PMMA950K A4) and an additional layer of conductive polymer (AR-PC 5090.02). Then, a hole array mask pattern was fabricated onto the PMMA layer using electron-beam lithography (ZEISS sigma 300). The sample was further processed by reactive ion etching (RIE). Anisotropic etching was achieved by RIE using CHF 3 and O 2 . The patterned PMMA layer acted as a mask and was eventually removed by RIE using O 2 . The size of every designed structure is ~80 × 80 microns.
Transfer process for the WS 2 monolayer
The CVD WS 2 monolayer on the Si/SiO 2 substrate was spin-coated with poly(L-lactic acid) (PLLA) before baking for 5 minutes at 70°C. Afterwards, a PDMS elastomer was placed on top of the PLLA film and then torn off. The composite was then attached to a glass slide and put under a microscope on a transfer stage. The PhC slab placed under the glass slide was aligned carefully using the microscope, and the glass slide was lowered to contact the PhC slab. The stage was heated to 70°C to improve the adhesion, and then, the glass slide was lifted with PDMS, leaving a WS 2 monolayer on the PhC slabs. After dissolving PLLA in dichloromethane, the WS 2 monolayer was finally transferred to the designed photonic crystal slabs.
Optical measurements

Experimental measurements of time-resolved PL
Please see Supplementary Material section 4 for the schematics and discussions.
Measurement setup of the polarization-resolved momentum-space imaging spectroscopy system and double-slit experiment

Please see Supplementary Material section 5 for the schematics and discussions.
Simulations
The transmittance spectra were simulated by Rigorous Coupled Wave Analysis (RCWA). The periodic boundary conditions were applied in the x and y directions. The polarization angle was set to π/4, and the phase difference was set to π/2 or 3π/2 to obtain circularly polarized incidence (the polarization angle 0 (π/2) corresponds to p(s) polarization). The Si 3 N 4 refractive index was set to 2, and the SiO 2 refractive index was set to 1.5. All the materials were considered to have no loss in visible light.
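The circularly polarized incidence described above can be sketched as a Jones vector. This is a minimal illustration, not the authors' RCWA code: the function name and the sign convention mapping δ = π/2 vs 3π/2 to the two circular polarizations are assumptions for this sketch.

```python
import numpy as np

def jones_vector(psi, delta):
    """Jones vector with polarization angle psi between the p and s
    components and relative phase delta. With psi = pi/4 and
    delta = pi/2 (or 3*pi/2), the two equal-amplitude components in
    quadrature give the two circular polarizations."""
    return np.array([np.cos(psi), np.sin(psi) * np.exp(1j * delta)])

def s3_over_s0(j):
    """Degree of circular polarization S3/S0 from the Stokes parameters."""
    ex, ey = j
    s0 = abs(ex) ** 2 + abs(ey) ** 2
    s3 = 2 * np.imag(np.conj(ex) * ey)
    return s3 / s0

# The two incidences used in the simulations: psi = pi/4, delta = pi/2 or 3*pi/2
a = jones_vector(np.pi / 4, np.pi / 2)
b = jones_vector(np.pi / 4, 3 * np.pi / 2)
print(round(s3_over_s0(a), 6), round(s3_over_s0(b), 6))  # 1.0 -1.0
```

The two phase choices yield S3/S0 = ±1, i.e. fully circular incidence of opposite handedness, consistent with the simulation settings in the text.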
Primary school student teachers' views about making observations
5(2), 2009

Abstract

Scientific observation plays a central part in the formation of scientific knowledge and thus it has an important role in the teaching and learning of science. Despite its importance, there are only a few studies that focus on the problems in making observations. The paper begins with a collection of factors affecting scientific observation. In order to find out primary school student teachers' conceptions of scientific observation, 110 student teachers were asked to write what things they connect to making scientific observations. For the majority of the student teachers, making observations seems to mean in the first place just noticing things. Only about 30% of the student teachers connected earlier experiences and knowledge with observations, and only 30% of the student teachers mentioned processing of information. To become efficient at making observations, student teachers need plenty of practice and experience of the different features of scientific observation.
Introduction
Observations play an important role in the formation of scientific knowledge. Thus making observations is central when pupils are taught the process skills (e.g. Harlen, 2000; Johnston, 2005). Furthermore, science curriculum documents reveal that student observation is held central to the learning of science. For example, the US National Science Education Standards (Standard, 1996) emphasize that "When engaging in inquiry, students describe objects and events, ask questions, construct explanations, test those explanations against current scientific knowledge, and communicate their ideas to others". Respectively, in the Finnish National Core Curriculum for Basic Education (2004), one of the objectives for pupils in grades 1-4 in Environmental and natural sciences is "to learn to make observations using the different senses and simple research tools, and to describe, compare, and classify their observations".
In everyday life observation is simply seen as "looking at things". However, in science observations are used to generate further explanations and theories about observed phenomena; they require skills associated with collecting and interpreting data and are influenced by the observer's assumptions and domain knowledge (Haury, 2002). Many researchers have strongly stressed that it is impossible to learn science, or more specifically to understand the nature of science, just by studying the process skills as such. Learning about science - developing an understanding of the nature and methods of science, and an awareness of the complex interactions among science, technology, society and environment - includes essentially the unification of conceptual and procedural knowledge (Millar, 1989; Gott & Duggan, 1994; Hodson, 1996; Metz, 2004). This means that at school the process skills have to be introduced jointly in the connection of authentic inquiries or scientific investigations. On the other hand, in order to help prospective teachers in teacher education to communicate about the nature of science, they have to be guided to recognize the many facets related to the process skills and how to use them in doing science.
Very few studies on learning and teaching science pay attention to the problems pupils have in making observations and how the skill of observation develops over the course of pupils' studies. Howes (2008) concludes that young children are good at observing if observing is defined as noticing and following behaviours or phenomena that are intriguing or important to them. However, to write about, or otherwise represent, what one sees is not a skill that comes easily. She stresses that observing and recording are intimately connected in learning to do both. These ideas are close to the findings of Tomkins and Tunnicliffe (2001). They found that 12-year-old pupils' observations were largely based on salient features but that sustained observations may provide a base for clearer hypothesis making. They also asserted that 'pupil talk' or 'diary reflection' is of considerable learning value, for it allows a formative juggling of the evidence and through that a seeking for meaningful pattern. Smith and Reiser (2005) describe a methodology for assisting high school biology students in the processes of observational inquiry and theory articulation. They stress that tracking the various actions that lead to final outcomes is necessary in order to help students understand the importance of accounting for causality during observations. Learners often ignore the causal, intermediate interactions that could be observed, focussing primarily on final outcomes (Kuhn, Black, Keselman, & Kaplan, 2000). Smith and Reiser (2005) suggest that in order to support student-directed observations, teachers should provide students with structured tasks that facilitate complex analysis and reasoning around observed materials. These tasks should help students to understand that observation is not a goal in itself. It is a method of inquiry that provides data for articulating explanatory hypotheses and models. Park and Kim (1998) analysed high school students' responses to contradictory results obtained by simple observation. They found that the majority of students preserved their own preconceptions. This means that when students directly observe some experiments they may neglect, distort or reject the observed results. Learning does not happen automatically without learners' cognitive effort. According to Haslam and Gunstone (1996, 1998), many high school science students saw observation as a teacher-directed process. However, in some cases this seemed to be a learned response, derived from coping with some teachers. Altogether, the impact of the teacher on students' ideas and beliefs about observation was strong. Observation was seen by students to be important to their learning. If the content associated with the observation was familiar, the observation was taken more seriously. Also students' interest in the topic affected their attention to the task.
Teachers' role as a facilitator of learning is gaining more emphasis. In the school environment, teachers' awareness concerning the purpose of observations, the rules governing observations and the possibilities observations offer for pupils' learning is crucial. It is the teacher whose interpretations set the framework for pupils' interpretations about their role as observers. However, the research literature has focused mainly on pupils' learning, while there is rather little research focusing on student teachers. Our research orientation directs attention to improving student teachers' knowledge and skills in teaching science, especially in connection with practical work in primary school. The focus is on the complexity of observations. Observations have an essential role both in the construction and verification of scientific models. Making observations is also the first step in doing investigations, as it contains all the components of a science inquiry process. Scientific observation is one of the process skills, like classification, measurement, making inference, prediction, recording, planning or communicating (see cf. Padilla, 1990).
The research questions of this study are: (i) How do student teachers understand the skill of observation in the context of school science; and (ii) What features of the skill of observation are identified initially?
Views of scientific observation
Scientific observation

Norris (1984, 1985) has put forward a generalized theory of scientific observation defined mainly in terms of human intentions and purposes, thus not taking human perception into account. He proposes that scientific observation is inherently heuristic because it is best conceived in its function as an aid and guide to scientific discovery. In reporting something as an observation a scientist intends: (i) to report on some event or state of affairs which the scientist considers to have been reliably witnessed using some sensory apparatus; and (ii) to indicate that this report is to play a foundational role in building knowledge in the field in question. One should note also that scientific observation is a function of the current state of knowledge; it is theory-laden, burdened with interpretations and assumptions, and observations are not infallible or beyond the possibility of doubt (see also Hodson, 1986). Every statement in science is in principle open to question. Furthermore, deciding whether or not a statement is a report of an observation must be done knowing the context of its production and the nature and intentions of its producer.
Observations play a fundamental role in scientific investigations. In some cases scientific observation is a rather simple activity - a matter of "looking at things" leading to concrete statements about the world like "it is snowing". In other cases scientific observation can be an extremely complex activity, especially when used to generate further explanations and theories about observed phenomena. Then it requires skills associated with collecting and interpreting data and is influenced by observers' assumptions and domain knowledge (Haury, 2002). Furthermore, as Hodson (1986) has warned, knowing what to observe, knowing how to observe it, observing it and describing the observations are all theory-dependent. Scientific observations are not categorical statements about objects and events in the external world. They are rather reports of how things seem to the observer, i.e. how the observer interprets them. Marking the distinction between what is doubtful and what is not doubtful is part of the motivation for science educators' emphasis on distinguishing observations from inferences and conclusions. Observations mark the beginning points of reasoning in the area of knowledge in question, the basis upon which other knowledge rests.
Thinking behind observations
Scientific thinking involves an interaction of conceptual and procedural understanding. Conceptual understanding is applied to facts and procedural understanding to skills. Procedural understanding is thinking-behind-doing. In the case of observing it includes, for example, the decisions that must be made about what to observe, how often and over what period. These two types of understanding are not mutually exclusive. Gott and Duggan (1994) emphasize that procedural understanding is more than a matter of recalling and using skills. Likewise, Warwick, Linfield and Stephenson (1999) draw a clear distinction between the concepts of 'process skills' and 'procedural understanding', the latter being related to the dialogue about evidence. Kuhn et al. (2000) conclude on the basis of their intervention study with 6th and 8th graders that a developmental hierarchy of skills and understanding underlies inquiry learning.
The main concern in research on science teaching and learning has been conceptual understanding, whereas procedural understanding has received considerably less attention. However, activities like observing, inferring, predicting, and controlling variables play a central role in scientific research as well as in studying at school. Millar (1989) has argued that it is misleading to portray the method of science in terms of discrete processes, as these are not linked by a set of rules and procedures into a method which will guide scientists on how to tackle a new problem. Scientific inquiry involves the exercise of skill, for example in deciding what to observe or in selecting which observations to pay attention to. He has also stressed that it is scientific observing, instead of mere observing, that should be developed and promoted through school science. Furthermore, the exercise and development of these skills depend crucially on a basis of science content and concept knowledge. Millar (1989) also emphasizes the necessity of clarifying the stages in developing these science skills.
Observing as a learning process
What happens when a person is making observations? According to variation theory, the starting point is the dynamic structure of awareness (Marton & Booth, 1997, pp. 82-109). A person's awareness contains all his/her experiences. An experience is formed in the interactions between the person and a phenomenon. In order to experience something, the person has to discern the target, to separate it from the background. This means that the person has to notice the visible and/or hidden features of the phenomenon and become aware of them. S/he discerns these aspects as entities or as details. Awareness can be guided to discern a certain target or some parts of the target while other parts remain hidden. The features of the target can be connected together, or for example to a relevant feature of another phenomenon, in many ways. The targets under observation can also alter very quickly. The target can vanish from awareness and be replaced with another thing that has originally been in the background. Even though awareness is a holistic experience formed from all the observations made in a certain situation, some features may come forward and others may stay in the background.
When a person makes observations of a thing or a phenomenon, s/he experiences a connection between a certain feature and its meaning. S/he forms an idea about the thing. This idea is a new state of awareness. Different people pay attention to different features when they are making observations about a target. At that moment they also have different knowledge and ways of thinking, so that they form different conceptions (Marton & Booth, 1997; Marton, Runesson, & Tsui, 2004). Scientific observation is closely connected to procedural and conceptual understanding, and in this way it is influenced by pre-existing knowledge and earlier experiences. Through processing, new knowledge and skills are formed. However, the working memory with its limited capacity and visual information-processing theory set limitations to observations (Sweller, 1994).
While observing an object or a phenomenon, one uses all senses or some equipment in order to identify similarities and differences as well as patterns in and between objects and phenomena. At the same time, when one becomes aware of something, s/he will connect to the thing that s/he is observing some meaning that is activated simultaneously. The meaning has been formed on the basis of his or her earlier knowledge or experiences (Marton & Booth, 1997). This means that one starts to interpret observations, or sequences and patterns in phenomena that are being observed, using the information that has been activated in the working memory.
During a science lesson, other factors such as motivation, context or the pupil's perceived expectation also have a significant effect on students' performance. Motivation is an important factor in the observation process, because it affects the orientation towards the situation and the observation process itself. Therefore, the situation should somehow increase curiosity or a feeling of autonomy, or should be personally meaningful (Deci & Ryan, 2002). Also, affective and emotional factors have to be taken into account.
In Figure 1, we have collected the main characteristics of scientific observation from a teacher's point of view when s/he is trying to improve the teaching of observation. In the first place, it is based on four questions: What to observe, how to observe, how to treat observations, and what personal factors affect observations. Bransford, Brown and Cocking (2000) describe how internal representations can be built up through many opportunities for observing similarities and differences across the observed phenomena. Consequently, the goal of these observing activities is to help students build internal representations - information stored in the memory that students can retrieve to generate inferences, solve problems, and make decisions. The nature of memory provides suggestions as to how observations are processed in the working memory and stored in the long-term memory (Rapp & Kurby, 2008).
We have left out conceptual thinking that is connected with the concepts and theories related to the subject of observation. We have only briefly referred to the learning environment, i.e. to the social, psychological and pedagogical contexts in which learning occurs and which affect students' attitudes and beliefs.
Subjects
In Finland, primary school teachers are educated to teach all subjects except foreign languages in primary school at grades 1-6 (pupils 7-12 years old), including mathematics and science. They are educated in eight universities in Finland in master's degree level programmes requiring 300 credit points (cp.). The credits are in accord with the European Credit Transfer System (1 ECTS = 1 cp. = 27 hours of work). The primary student teachers belong to a select population; only one in fifteen applicants passes the entrance examination of the primary teacher education programme. The student teachers are mostly female and they are all versatile, talented persons.
The primary school student teachers participating in this study had started their second year in the autumn just prior to the research. During their first year they had undertaken basic pedagogical studies and some didactics studies, for example in Finnish and mathematics. They had not yet started their training in school. The second year studies, basic courses in biology didactics (3 cp.), chemistry didactics (3 cp.) and physics didactics (3 cp.), focus on the teaching of the basic concepts and models of science. One of the aims within the courses is to help student teachers to understand explanatory models appropriate for school pupils at a certain developmental level. Besides that, it is important that students understand that biology, physics and chemistry are experimental sciences with special characteristics as school subjects.
Data gathering
The data was collected during the biology didactics course by asking the primary school student teachers (N = 110) the following questions:
1. What things do you think are connected to making observations?
2. What do you think is the skill of observation?
3. What kind of difficulties do you think you yourself have in making observations?
The aim of the first question (What things do you think are connected to making observations?) is, on the one hand, to get the students to concentrate on thinking about making observations and, on the other hand, to find out what features of making observations initially come to their mind and also how many different properties related to making observations they recognize. With questions 2 and 3 we aim to find primary student teachers' conceptions of the skill of observation.
Data analysis
The analysis used is mainly qualitative and can be best described as inductive content analysis (Patton, 2002). First, the student teachers' answers were read through many times. During the reading, certain patterns were abstracted. All the team members are teacher educators with the same kind of theoretical background in science education. When different categories started to form, each category was again read separately, and inconsistent answers were taken away and perhaps moved to other categories. This procedure, too, was done several times. After this, the categories that contained similar ideas were read together. The patterns were not exactly determined, so that after several discussions among the team members the final few categories with some subcategories were formed. All responses were carefully categorised together with their frequencies. In the final form the exact classification has been left out and only the full percentages are given.
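The tallying step behind the reported percentages and the mean number of categories per answer can be sketched as follows. This is a minimal illustration only: the category labels and the coded responses are hypothetical, not the study's actual data.

```python
from collections import Counter

# Hypothetical coded data: each set holds the category labels assigned
# to one student teacher's answer (labels are illustrative).
coded_responses = [
    {"object", "senses"},
    {"object", "observer", "processing"},
    {"senses"},
    {"object", "senses", "processing"},
]

n = len(coded_responses)

# Frequency of each category across responses, as full percentages
freq = Counter(label for resp in coded_responses for label in resp)
percentages = {label: round(100 * count / n) for label, count in freq.items()}

# Mean number of categories per response
mean_categories = sum(len(resp) for resp in coded_responses) / n

print(percentages["object"], mean_categories)  # 75 2.25
```

Because answers can fall into several categories at once, the percentages sum to more than 100, which matches how the results below are reported.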
Question 1: Things connected to making observations
The four categories found from the students' answers are given in Table I together with the percentage of the responses.The categories are listed according to the decreasing number of responses.The contents of these categories are then explained and some answers are given as examples.
The mean number of categories per student teacher was 2.2. A quarter of the student teachers' answers contained material from three categories, and also a quarter wrote only about one subject, i.e. mainly about the target of the observation or that observations are made with the senses. About 10% of the student teachers' answers contained material from all four categories.
Observations are made about entities, details, and changes
Altogether, about 70% of the student teachers mentioned some object in making observations. About half of them wrote about properties, phenomena or events, but only a couple mentioned changes. One quarter of them spoke about making observations of the environment, nature, or the world. Another quarter referred to the object very vaguely, talking about the thing, the object or a stimulus.
Observing is watching and examining things and phenomena. Exploring different properties of the object is surely the main point in making observations. In this way one can exclude impossible alternatives or map the object from general features to details.

I think that watching events in the surrounding nature is connected to making observations. Details are raised from the entity. One will learn new things by making observations.
Observations are made using senses and apparatus
About two-thirds of the student teachers wrote that observations are made using senses. Only about 5% of the student teachers mentioned that equipment like the microscope and telescope can also be used in making observations. Human beings observe the world with all their senses. When observing they look, listen, smell, and sometimes taste and touch.

Observations are influenced by observers' characteristics

Almost a third of all the student teachers saw that interest and concentration are important in making observations. Half of these student teachers mentioned interest, motivation or curiosity, whereas the other half spoke mainly about concentration, attention, carefulness and also empathy.
To make observations the following skills are needed: patience, concentration, working memory and creativity to look from the right place.
A fifth of all the students commented on the effect of earlier experiences and knowledge on observing and on observations. These comments varied from simple statements to deeper analyses. A pupil's own world view, as well as his/her constructions related to knowledge, skills and values, is central in making observations. A pupil's own internal models will direct his/her observations. On the other hand, observations will in turn revise his/her internal models.
In teaching it would be important to start from pupils' own observations and analyze and apply them meaningfully in everyday practice.
A little less than a fifth of these students wrote about both the emotional properties and the necessity of having knowledge about the target. The observer's earlier knowledge is connected to making observations. Observations may rest on earlier knowledge or deviate from it. In making observations one has to be interested in the things around oneself, or be interested both directly and purposefully in the object under observation.
Observations are recorded, processed, and reported
A third of all the students wrote about the processing of the information that could be obtained from observations. The information was processed using basic skills like identifying similarities and differences, classifying, interpreting and making conclusions. Here the word information is used instead of knowledge, as usually the students' real understanding as to the nature of science could not be inferred on the basis of the answers. In most of these answers the terms interpretation and conclusions were mentioned. Only a small number of student teachers mentioned recording and reporting.
Observations are directed to the properties of a thing or a phenomenon, or its behaviour, the environment and its effect on the thing/phenomenon. The observer can make notes and draw conclusions on the basis of his/her observations and form his/her own conception of the thing/phenomenon in question.
A more complete picture and conception can be formed from the observations like constructing a jigsaw puzzle.This gives a firm basis for going deeper into the matter and for the formation of concepts.
Only two student teachers spoke about making investigations. However, the basic process skills, like describing observations with words, classifying, and finding similarities, differences and patterns, could be found in the answers.
In making observations one has to watch the different properties of the object and notice the differences and similarities compared to other objects. One has to know how to perceive entities and small details. Also the skill to classify the things that one sees belongs to making observations.
Question 2: The skill of observation
The following five categories were found in the answers to the second question: What do you think is the skill of observation? The categories are listed according to the decreasing number of responses.
Curiosity and open mind

About 40% of the student teachers wrote that the skill of observation means a curious and unprejudiced mind, looking at things from many sides actively but objectively, seeing things "with new eyes". More than a third of them spoke about concentration and attentiveness, and some emphasized sensitivity. Because the questionnaire was given to the student teachers at the beginning of the biology didactics course, it is understandable that many students connected their answers with nature and plants.
To my mind curiosity, ability to work hard and opportunity are connected to the skill of making observations.In the classroom a pupil may not be eager to go into the nature but when s/he sees that the teacher and other pupils are interested in looking at what there is in the forest s/he also will become interested and will start to work harder.
Essential things, details and entities
Also about 40% of the student teachers' answers were classified in this category. In half of them it was stated that the skill of observation means paying attention to essential things. A third of them spoke about noticing details or characteristic features, whereas every sixth spoke about perceiving an entity.
To notice essential things, those that differentiate the object under observation from other things.
To find details and special features.
To react to the changes in the environment and connect single things into an entity.
Thinking processes
About 20% of the student teachers wrote about skills to produce causal relations.
The skill to observe includes also the development of thinking processes needed to treat observations.
When a person can make observations s/he can also direct his attention and analyze at least on some level information received through the senses.
Watching the nature and the environment
Nearly 15% of the student teachers described the skill to observe as a skill to notice things around oneself or in the nature.
The skill of observation means to make observations and watch the surrounding world and its different parts.
The skill to make observations includes recognition and classification of different types of organism.
Investigations
About 10% of the student teachers related the skill of observation to making investigations.

The skill to make observations is a skill to investigate, analyse and report, to keep eyes and ears open, and to understand similarities, differences and the many-sidedness of some things.
The majority of the student teachers' answers were classified in only one category. Only 25% of the answers contained material from two categories. Less than 10% of the student teachers' answers could not be classified into these categories. In most of these answers the student teachers spoke only about senses. Also some answers contained material outside the categories. A fifth of the student teachers mentioned the importance of senses. Some of the student teachers pointed out that the skill of observation develops when it is used and trained.
Question 3: Student teachers' own difficulties
The following four categories were found in the answers to the third question: What kind of difficulties do you think you might have in making observations? The categories are listed according to the decreasing number of responses.
5(2), 2009
Lack of knowledge
Nearly half of the student teachers stated that their difficulties in making observations were due to not having enough background knowledge about the matter to be observed. About a third of these students expressed that they were not able to pay attention to the right, essential things. We have interpreted this as meaning that they did not have enough knowledge to know what is essential and have therefore included these answers in this category. A fifth of them referred to their scarce knowledge especially in science.
My difficulty to make observations is due to almost nonexistent knowledge about the background.
I have almost no basic knowledge about different species. I know very little about both plants and animals. I can make observations only about their appearances but I do not have conceptions to which I could connect my esthetical observations.
Lack of concentration
A third of the student teachers thought that their difficulties in making observations were caused by lack of concentration, impatience or carelessness. Behind all these factors may be a lack of interest, as some of the students also emphasized. Lack of concentration may cause difficulties in making observations because one has to observe the situation intensively. Lack of interest and being inattentive may make observing more difficult.
Own conceptions and habits
A fifth of the student teachers figured that they were "set in their own ways", so that in observing familiar things they do not really observe the object but look for things that they are used to paying attention to. Many of them answered the question from a general point of view and not necessarily from the science point of view. In slightly more than a third of the answers in this subcategory the students wrote about preconceptions and stated that a person's own preconceptions can direct observations, so that changing one's conceptions may be difficult.
If one is too experienced and always acts as before it may happen that s/he only carries on as before and never makes any observations from her/his surroundings.One day s/he then will notice that "a new house has been built here", even though the house has been there for a long time.
Not enough critical thinking. I accept too easily as facts things that have been told to me and do not question them. Therefore I do not observe, ponder and analyse things enough.
Lack of practise
About a sixth of the student teachers stressed that one has to become accustomed to making observations, to have practice in observing. Some students wrote that their difficulties are related to their uncertainty about their own skills. There were, however, a few students who stated that they have no difficulties whatsoever in making observations.
One has to get used to making observations so that one can describe the thing with the right words.
Less than 10% of the student teachers' answers could not be classified to these categories. These students spoke about recognition of species, lack of time or colour blindness. About a fifth of the answers contained material from two of the categories; the other answers contained material from only one category.
Discussion and conclusions
Our main aim in this study was to find out what primary student teachers understand by making observations. As can be seen from Table I, less than 30% of the student teachers, at least spontaneously, seem to connect earlier experiences and knowledge with observations. Furthermore, when the student teachers described the skill of observation in their answers to question 2, only 20% of them mentioned thinking processes. The students who wrote about the information based on observations, and how it could be processed, did have the view that making observations is a holistic event in which all parts are simultaneously activated and there is continuous feedback between them. For the majority of the primary student teachers, making observations thus seems to mean in the first place just noticing things. They may therefore not pay enough attention to the essential role of observations in the construction and verification of scientific models. They may not spontaneously start wondering and questioning what is behind the observations, and how to explain them. However, all practical work in science involves interactions of procedural and conceptual understanding (Gott & Duggan 1994). The development of procedural understanding, on the one hand, and conceptual understanding, on the other, can be likened to a double helix, both developing in linked spirals (see Johnston 2005, pp. 30-31). It seems to be important to find out what difficulties the primary student teachers have, for example, in forming questions in regard to showing a phenomenon to pupils.
In answering question 1, only about 5% of the student teachers mentioned that equipment like microscopes and telescopes can also be used in making observations; most wrote mainly that observations are made using the senses. This proportion can be compared to the 20% of similar mentions by the upper secondary science students who answered a similar questionnaire (Ravanko, Hakkarainen & Ahtee, 2009). Norris (1985) has argued that the misconception about human sense perception playing a dominant role in scientific observation is due to the historical development of science. So it is time to change this conception, because the use of the microscope and telescope opens up completely new worlds in science for children. In preschool and at the beginning of primary school sensory observations obviously have a central role, whereas later more emphasis should gradually be placed on pupils' skills to use different equipment. This raises another research question about how ready and confident prospective primary teachers are in using different equipment in their teaching, or whether the use of equipment is one reason why prospective class teachers may avoid science teaching (Appleton, 2003).
The student teachers used terms like motivation, interest and also empathy to describe the factors which arouse and maintain attention and concentration within an individual making observations. Howes (2008) has pointed out that students are typically good observers only if the phenomenon is intriguing or important to them. Young children's science learning is based mainly on intrinsic motivation such as curiosity. Hidi, Renninger and Krapp (2004) point out that interest arising from intrinsic motivation is optimal for learning science. Moreover, interest can develop progressively (Hidi & Renninger, 2006) and it can be self-regulated by pupils (Sansone & Smith, 2000; Sansone, Wieb & Morgan, 1999). However, once school education has started, the curriculum replaces children's natural curiosity-directed learning.
From the science point of view it is important to understand that observations are not only categorical announcements about objects and phenomena. In the first place they are the observer's report about what the objects and phenomena seem to be from his/her point of view. Therefore at school pupils have to present their observations by telling about them, or give them in the form of tables and graphs. Only a couple of the student teachers mentioned reporting about observations. However, it is easy to omit mentioning things that to your mind are not relevant from the perspective of the questionnaire. Therefore, it could be interesting to study further primary student teachers' views about the importance of reporting one's observations.
The student teachers mentioned observing details, entities or nature, but only a couple of the student teachers mentioned changes. The purpose in making observations is to collect data from different targets. There is a difference depending on whether the target is an object, a phenomenon or the whole surrounding (Gott, Duggan & Roberts, 2008). Observations start from looking at similarities and differences and progress to making full investigations. To recognize the features of an object or a phenomenon demands paying attention to details, but at the same time one has to recognize the whole entity where the object is or the phenomenon happens. At the same time as a target is being observed, the observer also starts to analyse the data and compare it with pre-existing knowledge. Therefore, observation enables the observer to identify patterns or causal relationships, or to check ideas. According to the variation theory (Marton & Booth, 1997), every object or phenomenon has its own critical features that distinguish it from other objects or phenomena. In order to be able to create proper explanations one has to observe how the critical features vary in a certain phenomenon. Different people will discern a phenomenon in different ways depending on their own experiences and awareness (Marton & Booth, 1997). They will observe different features as well as discern the whole in its context and distinguish the parts from the whole (Marton, Runesson & Tsui, 2004). People differ according to what they know and what they are interested in, and therefore they will pay attention to different things. People become aware of and form a conception about the target they are observing when they connect to a certain feature the meaning that is activated simultaneously on the basis of their earlier knowledge and experience. They start to give explanations to the phenomenon (Marton & Booth, 1997; Marton & Tsui, 2004; Runesson, 2006).
When the student teachers tell about their own difficulties in making observations, the majority speak about lack of interest, their own habits and lack of practice. This is in accordance with the study by Johnston and Ahtee (2006), in which it was found that Finnish students come to primary teacher training programmes with a negative attitude and apprehension about physics teaching.
Scientific observation is a complex process. It forms part of a whole investigation, and its meaning is closely related to the purpose of the investigation. Furthermore, the conceptual framework cannot be isolated from observation, as it guides the selection and interpretation of the observations to be made. When disciplinary knowledge guides perception of observed phenomena, the teacher has to help students understand how to detect significant features during their observations and how to compare these against other observed examples to understand similarities and differences across behaviours (Driver, 1983; Hodson, 1986). This means that normally tacit, expert strategies should be made explicit also to student teachers (cf. Smith & Reiser, 2005).
Figure 1. Things connected to scientific observation.
Table I. Distribution of the student teachers' answers in question 1: What things do you think are connected to making observations? (N = 110).
|
v3-fos-license
|
2020-04-23T09:02:46.572Z
|
2019-06-30T00:00:00.000
|
218903597
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://opthalmology.medresearch.in/index.php/jooo/article/download/59/109",
"pdf_hash": "417a96c7c455b6072fe25c5cfe79ae0c2f800f80",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:852",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "58521f4c4151257fa9cbd78bacf9027a550a6206",
"year": 2019
}
|
pes2o/s2orc
|
Prevalence of spectacle use and amblyopia among young people presenting to a tertiary care institution of Bihar
Introduction: This study intended to estimate the prevalence of spectacle use and the distribution of amblyopia in young people presenting to the Ophthalmology outpatient department of a tertiary care institution of Bihar, India. Materials and Methods: This hospital-based prospective study was conducted over a period of 2 months amongst patients aged 10-24 years with refractive errors (in one or both eyes), whose refractive status, use of spectacles at about the time of check-up and presence or absence of amblyopia were recorded. Results: Of 1482 young people, 335 (22.6%) were already using spectacles at about the time of check-up. Of these, 276 (82.4%) had myopic errors in one or both eyes, 58 (17.3%) had hypermetropic errors in one or both eyes, and one (0.3%) had mixed astigmatism in both eyes. Of the 1257 (84.8% of all) young people whose both eyes were ametropic and included for consideration, 186 (14.8%) were found to have anisometropia and of these, 78 (about 42%) met the criteria for amblyopia. Overall 106 (about 7.2%, 95% CI 6.0-8.7) young people were found to be amblyopic (odds ratio = 54.7, p<0.0001). Conclusion: Only a small proportion of young people with refractive errors presenting to our tertiary OPD were spectacle-users, indicating inadequacy or lack of utilization of refraction facilities or motivation amongst patients. A strong association of anisometropia with amblyopia was observed. These findings emphasize the need for early detection and correction of refractive errors through community and school-based screening programmes to prevent amblyopia.
Introduction
The World Health Organization (WHO) identifies uncorrected refractive errors as a major cause of moderate to severe visual impairment worldwide, amounting to about 53% of all causes of visual impairment. About 12 million children aged less than 15 years are visually impaired due to refractive errors [1]. 'Vision 2020: the Right to Sight' is a global initiative of the WHO and the International Agency for the Prevention of Blindness (IAPB) to eliminate the main causes of avoidable blindness by giving priority to refractive errors, among other entities [2].
The majority of studies enquiring into the prevalence of refractive error and amblyopia are population-based, and none focuses specifically on the 10 to 24 years age group. Rohul found that about 86% of refractive errors were isometropic and 14% anisometropic [3]. It has been recognized in numerous studies that anisometropia, moderate to high hypermetropia and astigmatism are strongly associated with amblyopia, especially in early childhood [4][5][6][7][8][9][10][11]. Weakley et al found that anisometropia contributes significantly to the burden of ocular morbidity, being closely associated with amblyopia [4].
For children ≥10 years of age, the problem of amblyopia leads to a worse visual prognosis. Lin et al observed that children do not complain of defective vision, and may not even be aware of their problem [12]. They adjust to poor eyesight by sitting near the blackboard, holding books closer to their eyes, squeezing the eyelids and even avoiding work requiring visual concentration and this warrants early detection and treatment to prevent impaired scholastic performance and permanent disability. Thus, various
researches have emphasized the importance of early detection and treatment of amblyopia [8,13].
Population based studies about the prevalence of spectacle-use in our region are mostly derived from school-screening data, and therefore it is not possible to determine the burden it poses to the tertiary eye care system [14]. A hospital-based study from West Bengal found that only 40 of 255 (only about 16%) children aged 5-15 years with refractive errors were using spectacles, whereas the rest were newly diagnosed at their tertiary institution [15].
A study from Uttarakhand found that only about 22% of their subjects aged 5-15 years were using spectacles previously [16]. In the Rapid Assessment of Refractive Errors (RARE) Study from Andhra Pradesh, a quarter of those with uncorrected refractive errors did not feel the need for correction because they did not face problems in their day-to-day tasks [17]. In addition, the Andhra Pradesh Eye Disease Study found that nearly one-third of the subjects with correctable visual impairment discontinued the use of spectacles, either because they felt the prescription was wrong or because they felt the spectacles were uncomfortable [18].
The reasons for young people presenting to the tertiary OPD for refraction and prescription of glasses have been enumerated in our earlier publication [19].
However, no published studies report the prevalence of spectacle-use in outpatient attendees and the hospital burden of amblyopia in patients with refractive errors in the age group 10-24 years in our region. Hence, this study intended to estimate the prevalence of spectacle use and amblyopia in young people aged 10-24 years [20], who present to the OPD of a tertiary care institution of Bihar. The objectives were to determine the proportion of young people who are already using refractive correction vis-à-vis those newly diagnosed at the tertiary OPD as having a refractive error, and to estimate the proportion, severity and laterality of amblyopia in young people with refractive errors.
Material and Methods
Study design: This study was a hospital based, prospective, descriptive study undertaken in the outpatient department of Ophthalmology of Patna Medical College Hospital, a tertiary care institute in Bihar, India.
Ethical consideration & permission:
The study conformed to the principles of the Declaration of Helsinki. The Indian Council of Medical Research (ICMR) as well as the Institutional Ethics Committee approved the study protocol. Accordingly, informed consent notes presented to the subjects elucidated the purpose of the study, clearly mentioning to them that the study would report only the variables related to their refractive condition, and not their identities or other confidential information.
Sampling methods and sample size calculation: Patients in the age group of 10-24 years presenting with refractive errors to the out-patient department of ophthalmology during the study period were taken as study subjects.
Inclusion criteria: Routine patients presenting with headache and/or visual disturbances were investigated for the presence of refractive error in their eyes. Consenting individuals in the age group of 10-24 years with diagnosed refractive error (in one or both eyes) were included in the study sample.
Exclusion criteria: Young people presenting with bilateral organic defects such as strabismus, corneal opacity, opacity of the lens, and choroid and retinal disorders were excluded [21]. Eyes with unilateral organic defects were also excluded from consideration.
Data collection procedure: Data were collected on all working days (Monday through Saturday) during the study period using a pre-designed structured interview schedule. Information was recorded on the refractive status of the patient, whether the patients were already using spectacles or any other form of refractive correction, and whether amblyopia was present or not.
Refractive errors were classified as shown in Table 1. Amblyopia was considered as the cause of visual impairment in eyes with best corrected visual acuity of 20/40 or worse and no apparent organic lesion, so long as one or more of the following criteria were met [22]: Amblyopia was further classified as moderate and severe [23], according to best-corrected visual acuity (BCVA) in the worse eye at presentation, unilateral (anisometropic) and bilateral (isometropic i.e. bilateral ametropic), as shown in Table 1. Apart from those who were actually wearing refractive correction, those who had lost or broken their spectacles within the past two weeks were also considered as using spectacles at about the time of check-up.
Results
The study sample consisted of 1482 young people aged 10-24 years who were diagnosed as having refractive errors. Of these, 335 subjects (23%) were already using spectacles (or other forms of refractive correction) at about the time of check-up, while 1147 (77%) were newly diagnosed with refractive errors at the tertiary OPD. There were 210 patients with unilateral ametropia, and 15 patients who had ametropia in one eye with the other eye affected by or lost to organic disease (Table 2).
Of the 335 spectacle-users, 276 (82.4%) had myopic errors in one or both eyes, 58 (17.3%) had hypermetropic errors in one or both eyes, and one (0.3%) had mixed astigmatism in both eyes. No patient was found to have myopic error in one eye and hypermetropic in the other. On the other hand, among 1257 young people whose both eyes were ametropic and included for consideration, 186 (about 14.7%) were found to have anisometropia and of these, 78 (42%) met the criteria for amblyopia. Likewise, among the 210 young people who were found to be emmetropic in one eye and ametropic in the other, 31 (14.7%) had anisometropia and 12 (38.7%) had amblyopia. Overall, 106 (about 7.2%, 95% CI 6.0-8.7) young people were found to be amblyopic. Amblyopia was found to be moderate in about 88% and severe in about 12% ( Table 2). Among the 217 subjects with anisometropia, about 41.5% had amblyopia, and the association of anisometropia with amblyopia was statistically significant (p<0.001, odds ratio = 54.7, Table 3). Of 106 amblyopes in total, 90 (about 85%) had unilateral (anisometropic) and 16 (15%) had bilateral ametropic amblyopia. The duration of use of spectacles was 6.8±1.24 years (mean ± SD, 95% CI 6.67-6.93).
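The headline statistics can be re-derived from the counts given above. The following Python sketch (illustrative, not from the paper) assumes the 2×2 table implied by the Results — 90 amblyopes among the 217 anisometropes, leaving 16 of the 106 amblyopes among the other 1250 included subjects (1257 bilateral plus 210 unilateral ametropes, 1467 in total) — and recomputes the odds ratio and a Wilson score 95% confidence interval for the overall prevalence, which happens to reproduce the reported 6.0-8.7%:

```python
import math

# 2x2 table inferred from the Results (an assumption -- the paper reports
# percentages, not this table directly): 90 of the 217 anisometropes were
# amblyopic, leaving 16 of the 106 amblyopes among the other
# 1467 - 217 = 1250 included subjects (1257 bilateral + 210 unilateral).
a, b = 90, 217 - 90        # anisometropia: amblyopic / not amblyopic
c, d = 16, 1250 - 16       # no anisometropia: amblyopic / not amblyopic

odds_ratio = (a / b) / (c / d)   # ~54.7, matching the reported value

# Wilson score 95% CI for the overall prevalence 106/1467
n, k, z = 1467, 106, 1.96
p = k / n
centre = p + z * z / (2 * n)
half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
denom = 1 + z * z / n
wilson_lo = (centre - half) / denom
wilson_hi = (centre + half) / denom

print(f"OR = {odds_ratio:.1f}")
print(f"prevalence = {100 * p:.1f}% (95% CI {100 * wilson_lo:.1f}-{100 * wilson_hi:.1f}%)")
```

The paper does not state which interval method it used; the Wilson interval is an assumption here that merely matches the reported figures.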
Discussion
This study was conducted on a large sample of 1482 young people aged 10-24 years with refractive errors presenting to the OPD of a tertiary care institution of Bihar, India. Only about 23% (335 of the 1482) young people of the study sample were already using spectacles (or other forms of refractive correction) at about the time of check-up, concurring with the Uttarakhand study [16]. In other words, only two out of nine patients were spectacle-users compared to seven out of nine being non-users, a huge difference for a tertiary hospital, indicating inadequate refraction facilities at lower levels of the organizational framework or a lack of their utilization. This was less than in the APEDS [18], where the prevalence of spectacle-use was 29.5% overall, but far more than in the West Bengal study [15], which found it to be 16% in the 5-15 years age group, and a Saudi Arabian study [24], which found it to be only 9.4% in the 6-14 years age group.
The APEDS was a community-based study, and it may be derived from these observations that the rates of detection of refractive errors as well as the coverage of and compliance with spectacle-use are lower in Bihar than in Telangana/Andhra Pradesh. Of the 335 spectacle-users, over 82% (276 of 335) had myopic errors in one or both eyes and less than 18% had hypermetropic errors in one or both eyes (Table 2). This was perhaps because the majority of previously undiagnosed patients were hypermetropic, mostly with good uncorrected/presenting visual acuity, and had not visited other centres previously, or had been prescribed spectacles at previous examinations but not used them, or had been under-corrected. Similar facts were reported in the RARE study and APEDS from Andhra Pradesh [17,18]: many patients with uncorrected refractive errors did not feel the need for correction because they did not face major problems in their day-to-day tasks, or discontinued the spectacles due to incorrect prescription or discomfort. The prevalence of anisometropia found in the present study was approximately 15% in the absence of organic disease. Rohul et al reported a similar prevalence (about 14%) from Kashmir [3], whereas Mittal et al reported a lower prevalence (about 7%) from Uttarakhand [16].
Amblyopia was observed in about 42% of anisometropic young people, amounting to about 7% of the total, which is ten times what was reported in a study from Bihar's neighbouring country Nepal [6]. The strong association of anisometropia with amblyopia (odds ratio = 54.7, p<0.001, Table 3) observed in the present study is in concordance with several previous studies [5][6][7][8][9][10][11]. It was observed that about 85% of amblyopes had anisometropia (90/106) and about 15% had bilateral ametropic amblyopia (16/106). Sapkota et al from Nepal observed that 29% of their subjects had bilateral amblyopia due to high ametropia [6]. In a study by Rizvi et al, the frequency of amblyopia was found to be 74% in the anisometropia group, and anisometropes were found to be 2.5 times more likely to have amblyopia as compared to ametropes [11]. In the present study, amblyopia was found to be moderate in about 88% of subjects. These findings emphasize the need for early detection of refractive errors through community and school-based screening programmes. Researchers in Sweden and the United Kingdom have suggested screening at the age of four to five years, once the child begins his/her education [8,13].
The present study has served as an initial inquiry into spectacle-use and amblyopia among the young patient population presenting to tertiary institutions. Further multicentric hospital-based studies would doubtlessly provide greater insight into underlying problems leading to poor spectacle compliance and the burden of amblyopia.
Conclusion
Amblyopia was present in about 7.2% of all young people with refractive errors, and in about 42% of anisometropes, presenting to the OPD of a tertiary care institution. Amblyopia was found to be moderate in about 88% of subjects. This scenario demands immediate attention, as the visual prognosis for uncorrected refractive errors in the investigated age group is grim.
Less than one-fourth of the young people were already using spectacles (or other forms of refractive correction) at about the time of check-up. The rest were newly diagnosed at the tertiary OPD, reflecting deficiency in visual services at peripheral health establishment level for the detection and management of refractive errors and counselling regarding spectacle compliance.
What this study adds to existing knowledge?
The present study has put forward a more logical classification of refractive errors based on whether rays from infinity converge in front of or behind the retina. Myopic errors thus included myopia and myopic astigmatism (simple and compound), and hypermetropic errors included hypermetropia and hypermetropic astigmatism (simple and compound). There is a need to strengthen school screening programmes, vision centres and secondary eye care centres, and to equip them with proper refraction facilities in order to serve the needs of young people with refractive errors and to prevent consequential amblyopia. In addition, there is a need to establish a network of trained opticians and counsellors in order to provide correct spectacles and encourage spectacle-use among those who require it, to help achieve Vision 2020 norms.
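The classification described here can be sketched as a small function. The function name and sign conventions are illustrative assumptions (refraction expressed as sphere and cylinder powers in dioptres, with principal meridian powers `sphere` and `sphere + cylinder`); the rule itself follows the text: rays converging in front of the retina in both principal meridians give a myopic error, behind it in both a hypermetropic error, and one of each gives mixed astigmatism.

```python
def classify_refractive_error(sphere, cylinder=0.0):
    """Classify refraction by where rays from infinity come to focus.

    The two principal meridian powers (dioptres) are taken as `sphere`
    and `sphere + cylinder`. Negative power: rays focus in front of the
    retina (myopic); positive: behind it (hypermetropic). Illustrative
    sketch only -- names and conventions are not from the paper.
    """
    m1, m2 = sphere, sphere + cylinder
    if m1 == 0 and m2 == 0:
        return "emmetropia"
    if m1 <= 0 and m2 <= 0:
        # myopia, or simple/compound myopic astigmatism
        return "myopic error"
    if m1 >= 0 and m2 >= 0:
        # hypermetropia, or simple/compound hypermetropic astigmatism
        return "hypermetropic error"
    # one meridian focuses in front of the retina, the other behind it
    return "mixed astigmatism"
```

With this rule, for example, a refraction of -1.00 / +2.00 has meridian powers -1.00 and +1.00 and is classified as mixed astigmatism, matching the one such case reported in the Results.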
|
v3-fos-license
|
2020-07-09T09:15:49.565Z
|
2020-07-04T00:00:00.000
|
222241968
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.33394/jk.v6i2.2316",
"pdf_hash": "3368b374744cbc19cb32927086a268cabb8c939f",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:853",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "ab7cdca40b9bd552db717d3af4d80e607aee5af0",
"year": 2020
}
|
pes2o/s2orc
|
Comparing Orphans' Hope and Loneliness as Lifelong Learners in Tanjung Barat Orphanage South Jakarta
This research aims to compare loneliness and hope among orphans in the Tanjung Barat orphanage, South Jakarta. The research used a descriptive qualitative approach with a case study method. Data collection techniques were interviews, observations, and filling in a simple questionnaire. The sample comprised 36 children of the Tanjung Barat orphanage, consisting of elementary school, junior high school and senior high school students. This was done on site and compared to the literature that had been previously established. Comparing loneliness and hope among orphans aimed to identify and find ways of mapping loneliness and hope among orphans that had been explored and investigated empirically, to find out the comparison and contrast with the mapping of loneliness and hope expressed among them, and to find examples of self-assessment to evaluate and encourage the mapping of their loneliness and expectations and present them to caregivers, parents and professionals. By comparing their loneliness and hopes they can actively engage in social interaction between themselves and others, and improve their personal skills, welfare and life skills. Article History: Received 14-02-2020, Revised 24-02-2020, Published 04-07-2020.
Introduction
There are 36 orphans aged between 6-18 years, 13 boys and 23 girls, who live in the Tanjung Barat Orphanage of Badan Sosial Darma Kasih Gereja Kristen Pasundan, called the YBSDK GKP Tanjung Barat foundation, in South Jakarta. They are all students: 17 attend elementary school, 10 junior high school, and 11 senior high school or vocational school. Most of them come from Kuningan in West Java, 4 children come from Papua and the others from around Jakarta. They have lived in the orphanage for between 4 months and 5 years. As the number of broken families increases due to economic challenges and socio-cultural changes, so does the number of children sent to live there. Their healthy physical, mental, psychological and social development depends on the establishment of mutual love and care between a founder, who is also the preacher of the Gereja Kasih Pasundan Tanjung Barat church, seven caretakers, and the orphans themselves. However, children may need protection as a result of poverty, family problems, parents' physical, psychological or mental deficiencies, the death of a spouse, neglect or abuse, teenage marriages or extramarital pregnancies (Durualp & Cicekoglu, 2013). Besides, they often do not receive effective treatment, so prevention and early intervention are crucial. Orphans may face emotional difficulties that could place them at a higher risk for anxiety and depression (Gallegos, Rodríguez, Gómez, Rabelo, & Gutiérrez, 2012). Thus, support from peers should be increased. It was found that orphans with significant social support resources and activities, positive family and peer support, and constructive words feel more hopeful. Peer relations gain importance as friendships begin; orphans strive to form peer groups with whom they can share their inner world and problems, encourage each other and have fun. An orphan's psychological state is also significantly correlated with hope as a positive inner resource.
Again, hope is a competence, an ability to respond to something. Hope is a type of psychological and spiritual satisfaction. Moreover, having hope is an experience of a sense of purpose and meaning in life and a feeling filled with infinite possibility in the orphans' lifestyle. Hence, the orphans need support through a close relationship between their positive religious coping style and social support among peers and others. Besides, strong social support can improve positive religious coping styles as positive activities among them. Through self-regulation of having hope, the orphans can more effectively manage social interaction, improve their self-care ability, maintain positive emotion, feel less loneliness and reduce the incidence of complications, thereby improving their quality of life.
Loneliness is a painful emotional experience that affects children's current quality of life and represents a developmental risk for their future. It signals the existence of a failure in the valued area of interpersonal relationships. Loneliness does not mean that children do not have friends and social networks. However, it means that they feel excluded and socially alienated. Loneliness is a subjective experience that reflects a mismatch between children's needs and their social environments. The study of loneliness is in fact the study of children's interrelations, including their self-perceptions in terms of how the children view others and themselves, how others view them, and how they feel about these perceptions and conceptions (Margalit, 2012).
Loneliness refers to feelings and thoughts of isolation and of being disconnected from others, and is a cognitive appraisal of social relationships. Individuals who report loneliness may have a large social network, but they are not satisfied with the interactions they have with those around them. In the general population, greater loneliness is associated with poor psychological well-being, including increased depressive symptoms and increased hopelessness (Ekas, Pruitt, & McKay, 2016). Moreover, loneliness, according to Weiss (Victor & Yang, 2012), is a relational deficit: the lack of relationships of the desired quantity and/or quality. Loneliness is also defined as a subjective feeling of dissatisfaction with social relationships (Montoliu, Hidalgo, & Salvador, 2019).
The UCLA Loneliness Scale was used to evaluate loneliness. This scale was developed by Russell et al. in 1980, and its reliability and validity study was conducted in 1989 by Demir. The scale is a Likert-type instrument consisting of 20 questions; each answer is scored 1, 2, 3, or 4. The lowest possible score is 20 and the highest is 80, and a high score is considered an indicator of an individual's more intense feeling of loneliness (Çağan & Ünsal, 2014).
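As a minimal sketch of the scoring rule just described (the function name and interface are our own invention, and any reverse-keyed items the full instrument contains are omitted for brevity), the 20-item, 1–4 summation could look like:

```python
# Hypothetical scoring sketch for the 20-item UCLA Loneliness Scale as
# described above: each item is answered 1-4 and the items are summed,
# giving a total between 20 (lowest loneliness) and 80 (highest).
# Item wording and reverse-keyed items are not reproduced here.

def score_ucla(responses):
    """Sum 20 item responses (each 1-4) into a total between 20 and 80."""
    if len(responses) != 20:
        raise ValueError("expected 20 item responses")
    if any(r not in (1, 2, 3, 4) for r in responses):
        raise ValueError("each response must be 1, 2, 3, or 4")
    return sum(responses)

# A respondent answering "2" to every item scores 40.
total = score_ucla([2] * 20)
```

The validity checks make the range guarantees of the scale (20–80) explicit in code.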
Kniççi (Ören, 2012) reported that children who live in an orphanage experience feelings of fear, despair, and insecurity, and stressed that anxiety levels in these children were higher than those of children living with their parents. Biyikh (Ören, 2012) found that children living in orphanages lagged behind their peers living with their parents in mental development, integration, socialization, responsibility, language development, and independent activities. He also emphasized that this lag in the general development of children growing up in orphanages results from their spending the first years of life without love, and that living conditions in the orphanages reinforce these negative effects.
In his study, Kutlu (Ören, 2012) reported that the loneliness levels of adolescents living in an orphanage differed significantly according to whether or not their parents were separated, the age of admission to the orphanage, whether or not they had a sibling in the same orphanage, the length of residence in the orphanage, whether and how often parents visited, whether staff were available to help with loneliness when needed, and other factors such as the adolescents' academic performance at school, expectations for the future, and perceptions of the attitude of orphanage staff.
The concept of hope originated in religion and philosophy. In ancient times, hope was a pejorative word; people often thought hope was empty and worthless. In ancient Greece, hope was regarded as a neutral concept that did not involve any positive or negative emotion. In the Bible, however, hope carried the meaning of trust, faith, and promise. In the 20th century, the German philosopher Ernst Bloch first placed hope among the core concepts of philosophy and redefined its meaning from the perspectives of anthropology and ontology in his book Das Prinzip Hoffnung. Miller et al. (Lu & Cui, 2016) explored the meaning of hope based on its nature and etymology and described hope as a series of expectations associated with a good status for the individual. Hope is an ability to respond competently to circumstances; it is a type of psychological and spiritual satisfaction, an experience of a sense of purpose and meaning in life, and a feeling filled with infinite possibility. Snyder (Lu & Cui, 2016) later proposed the newest conception of hope: he regarded hope as a disposition of thinking and behavior, derived from acquired learning, that is not only a cognitive characteristic but also a dynamic state. Hope is based on a goal from which it cannot be separated. The hope theory model described above mainly includes three elements: a goal, pathways thinking, and agency thinking.
Hope involves looking toward the future with a sense of positive expectation and intentionality. It provides a sense that one has a future and enables coping with events in the present while supporting the individual in using crises as opportunities for growth (Barut, Dietrich, Zanoni, & Ridner, 2016). Hope has received increasing attention as a variable that may promote psychological well-being and has traditionally been considered a strength of character that is part of the engaged life. In his theory of hope, Snyder (Ekas et al., 2016) argues that human behavior is goal-directed and that goals are fundamental to hopeful thinking. Hopeful thinking consists of an individual's perceived ability to generate ways of reaching goals (pathways) as well as their perceived ability to use these pathways to reach their goals (agency). Thus, agency is the motivational component of hope and reflects an individual's intention to act upon the pathways generated. Individuals who engage in elevated levels of both agentic and pathways thinking are typically referred to as high-hope people. Hope is generally conceptualized as a dispositional characteristic and measured using trait measures; however, hope can also fluctuate in response to different situations. It is important to note that while optimism and hope both belong to the realm of positive psychology and appear to be similar constructs, they are only modestly related: optimism reflects an individual's general expectancies in life, whereas hope refers to goal-directed thoughts and actions.
According to the research (Barut et al., 2016), hope was defined as having a future orientation or a positive expectation of something in the future. This aligned with participants' definitions of hope: "Hope means to have something waiting. Not necessarily waiting, but to have." "When I think of 'hope' the word, I think of future. You have a grasp for the future. You feel like it's going to occur no matter what. It's not something that you can run low on." "Hope means a lot because everybody needs to hope in something or believe in something. I want to be hopeful that I have a bright future myself, even if it's not going to be a big event or anything." "Hope gives people a reason to do something. What hope means for me is a reason to live." Having a future orientation was valued, even though it was not experienced by all participants. Hope provided meaning and motivation to keep trying despite obstacles encountered.
Facing numerous pressures inside and outside the orphanage, orphans tend not to be at ease, which results in many psychological problems. These problems lead to a decline in orphans' quality of life and hope for survival, seriously affecting their physical and mental health. Therefore, it is important to improve orphans' quality of life by increasing their level of hope, which relates not only to the individual and their surroundings but also to the development of their future.
Research Method
This research aims to compare loneliness and hope among orphans at the Tanjung Barat orphanage, South Jakarta. To this end, the researcher carried out a qualitative case study, with data collected through interviews, observations, and a simple questionnaire. The sample comprised all 36 children of the Tanjung Barat orphanage, spanning elementary, junior high, and senior high school. Data were collected on-site and compared with the previously established related literature. Comparing loneliness and hope among the orphans is intended to recognize and discover in what ways the mapping of loneliness and hope among them has been empirically explored and investigated, to find the similarities and contrasts in how they express loneliness and hope, and to discover examples of self-assessment that evaluate and encourage this mapping so that it can be presented to caretakers, parents, and professionals. Through this comparison, the orphans can be actively involved in social interaction with one another and with others, and can improve their self-improvement, well-being, and life skills.
Findings and Discussion
Participants were the orphans of the Tanjung Barat orphanage, children from elementary school through senior high school. Details of the study were shared with the caretakers of the orphanage and with the researchers' university. Orphans who expressed interest in the study were provided with further information about it. After agreeing to participate, they received the questionnaires to complete: they first read and signed the informed consent and then completed questionnaires covering demographics, personal characteristics, and education background. The results are as follows:
Picture 1. The Comparison Between Willpower and Way Power
Eleven orphans scored higher on waypower than willpower, two orphans scored the same on both, and twenty-three orphans scored higher on willpower than waypower. The study showed that teenage orphans who had more life experience, more education, and more life purposes chose to come to terms with their lives; they had more waypower than just willpower. However, they still needed support from adults (parents, relatives, teachers, caretakers, friends, and society). On the other hand, the children who depend on adults for care, love, and support showed more willpower than waypower.
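The three-way comparison above (waypower higher, scores equal, or willpower higher) amounts to comparing two subscale scores per respondent. A hypothetical sketch, using made-up subscale pairs rather than the study's actual data, and with labels of our own choosing:

```python
from collections import Counter

def hope_profile(willpower, waypower):
    """Classify one respondent by comparing the two hope subscale scores."""
    if willpower > waypower:
        return "willpower-dominant"
    if waypower > willpower:
        return "waypower-dominant"
    return "balanced"

# Illustrative (invented) (willpower, waypower) pairs for three respondents.
scores = [(28, 22), (20, 26), (24, 24)]
counts = Counter(hope_profile(w, p) for w, p in scores)
```

Tallying `counts` over all 36 respondents would reproduce the 23/2/11 split reported above.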
For the loneliness results, we used different theory-based indicators to evaluate the orphans' subjective feelings of loneliness. The question used as a basis for comparison with the other sets of indicators was a dichotomously coded (no/yes) question about whether the respondent had experienced frequent or constant feelings of loneliness during the past year. The other two single-item indicators were the frequency of socializing ("How often do you socialize with others?") and subjective satisfaction with existing personal relationships ("How satisfied are you with your personal relationships?"), the latter answered on a scale from 0 (dissatisfied) to 10 (satisfied). To measure social and emotional loneliness, we used a simplified version of the UCLA Loneliness Scale, version 3 [28], in which the number of items is reduced to ten and the wording of the items and the response format are simplified. The instruction was "Indicate how often each of the statements below is descriptive of you", and the responses were 0 = never; 1 = rarely; 2 = sometimes; and 3 = often.
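The simplified ten-item scale described above lends itself to per-item tallies of how often each response option was chosen. A sketch with invented answers (the real item wording and data are not reproduced here):

```python
from collections import Counter

# Hypothetical sketch: given each respondent's answers to the ten items
# (coded 0 = never, 1 = rarely, 2 = sometimes, 3 = often), count how many
# respondents chose each option on each item. The data below are made up.
answers = [
    [0, 1, 2, 3, 0, 0, 1, 2, 3, 0],
    [1, 1, 0, 3, 2, 0, 1, 3, 3, 1],
    [0, 2, 2, 3, 0, 0, 2, 2, 3, 0],
]

# item_counts[i] maps each response option to its frequency on item i+1.
item_counts = [Counter(resp[i] for resp in answers) for i in range(10)]
```

Per-item tallies of this kind underlie summaries such as which items drew the most "never" or most frequent answers.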
In order to explore the consequences of loneliness, we asked the respondents to choose every applicable item from a list prepared on the basis of previous studies on the consequences of loneliness. The question was "Has loneliness caused you any of the following issues during your life?", and the consequences listed were illness, depression, lack of initiative, fear of the future, isolating at home, social fears, fear of low school marks, loss of friendship, poverty, traffic jams, and fear of rejection by society. By gender, the boys showed the striking result that they were less lonely than the girls; the girls reported low loneliness overall but still felt lonely.

Table 4 and Picture 5 summarize the answers by total score: "never" was chosen 103 times, most often on items 6 and 13, and "always" was chosen 101 times, most often on item 9. Based on these results, the orphans answered "NEVER" to question 6 ("How often do you feel that you have a lot in common with the people around you?") and question 13 ("How often do you feel that no one really knows you well?"). This means that their relationships in the orphanage were satisfying, caring, and loving; they never felt abandoned. Moreover, most of them answered "ALWAYS" to question 9 ("How often do you feel outgoing and friendly?"), which suggests they had strong personalities even though they lived in an orphanage.
Descriptive statistics and correlations between the study variables are presented in the tables and charts above. Although the willpower and waypower subscales and loneliness were significantly correlated, there was no evidence of multicollinearity between the variables. Willpower and waypower were associated with decreased loneliness and with increased friend and family support. Conversely, overall hope (willpower and waypower combined) was associated only with decreased loneliness. Only good relationships among the orphans themselves and caretaker support were associated with decreased depressive symptoms, whereas both friend and family support were associated with decreased loneliness. Next, we examined whether any demographic variables (e.g., child age, level of education, ethnicity, gender, or personality) were related to the study variables, to determine whether they needed to be included as covariates in the subsequent model.
Based on our results, among the indicators of hope and loneliness, the least predictive value for self-reported negative consequences of loneliness came from the single questions concerning the number of good relationships or friendships and good personality; these indicators had the least significant regression loadings on the sets of negative consequences of loneliness. Moreover, consistent with the study hypotheses, hopeful thinking, with its tendency toward willpower and waypower, was simultaneously associated with decreased loneliness and increased relationship support. These findings support previous cross-sectional studies in which high-hope individuals report less loneliness and more social support; higher-hope individuals also tend to report positive relationships.
In the current study, overall hope, as compared with its willpower and waypower components considered separately, was significantly associated with personal outcomes. In hope theory, waypower refers to the ability to generate ways of reaching goals, whereas willpower refers to an individual's perception of whether they can use those imagined pathways. Willpower is believed to be the motivational component of hope theory and consists of thoughts such as "I can meet my purposes." Having hope, in both its willpower and waypower forms, enabled the orphans to face numerous stressors and barriers and to reduce loneliness in order to manage their important life purposes for better well-being. Having more hope means more positive personal and interpersonal interventions, which correlated with decreased loneliness.
Conclusion
The findings of this study highlight the importance of hope for coping with the feelings of social isolation and loneliness that orphans commonly report. Given that increased hopeful thinking was associated with less loneliness, the construct of hope should be given more attention in interventions aimed at improving personal well-being.
Studies have shown that exercises designed to generate hopeful thinking (waypower and willpower), in which individuals reflect on a time when a negative event led to unexpected positive outcomes, can increase levels of happiness. Increasing hope may be particularly important in strengthening good relationships for orphans and lowering their loneliness levels.
Suggestion
One obstacle in this research was providing the orphanage children with an understanding of the contents of the questionnaire, which was overcome by using simpler language that fits their daily lives. Another obstacle was gathering the data around the orphans' schedules for interviewing: various departments, foundations, and schools visited the orphanage to conduct research, run charities, and hold social programs, and the children's school timetables differed, so we had to arrange the schedule around their timing and readiness.
Based on the obstacles above, the researcher suggests that teachers, parents, and professionals explore the current state of knowledge in this field, and that orphanage children from different age groups or school grades all be included in future work to provide more comprehensive and valuable insights into this unexplored area. In addition, the children there need more engagement with adults who care for, protect, and love them, as a source of hope.
|
v3-fos-license
|
2024-01-28T16:16:55.953Z
|
2024-01-26T00:00:00.000
|
267297689
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2223-7747/13/3/368/pdf?version=1706267938",
"pdf_hash": "69ce78dc7d142b7b2c92d8ddf596655a98630f43",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:854",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "e8822cec077df9e983728abfbb7b50d13ef13540",
"year": 2024
}
|
pes2o/s2orc
|
Spider–Plant Interaction: The Role of Extrafloral Nectaries in Spider Attraction and Their Influence on Plant Herbivory and Reproduction
Spiders, abundant and diverse arthropods that occur in vegetation, have received little attention in studies investigating spider–plant interactions, especially on plants which have extrafloral nectaries (EFNs). This study examines, through manipulative experiments, whether spiders attracted to EFNs on the plant Heteropterys pteropetala (Malpighiaceae) function as biological protectors, mitigating leaf herbivory and positively impacting plant fitness. Spiders are attracted to EFNs because, in addition to consuming the resource offered by these structures, they also consume the herbivores that are attracted by the nectar. At the same time, we documented the reproductive phenology of the plant studied and the abundance of spiders over time. Our results revealed that the plant's reproductive period begins in December with the emergence of flower buds and ends in April with the production of samarids, fruits which are morphologically adapted for wind dispersal, aligning with the peak abundance of spiders. Furthermore, our results demonstrated that spiders are attracted to plants that exude EFNs, resulting in a positive impact on reducing leaf area loss but a neutral effect on the protection of reproductive structures. By revealing spiders' protective function for plants' vegetative structures, this research highlights the ecological importance of elucidating the dynamics between spiders and plants, contributing to a deeper understanding of ecosystems.
Introduction
Interactions between spiders and plants can provide evidence of the existence of facultative mutualistic relationships, influencing the structure of ecological communities and the fitness of plants [1][2][3][4]. Understanding the function of each organism in this interaction will allow for a better understanding of the evolutionary paths that lead to mutualism. Spiders are considered excellent predators, and, when they live on plants, they forage for their prey. In addition, spiders occasionally use plant resources to supplement their insect-based diet [5][6][7][8]. Among these plant-supplied foods are the products of extrafloral nectaries (EFNs) [8,9]. Thus, spiders provide various services to plants, including protection against leaf and flower herbivores, consequently reducing leaf herbivory and/or increasing seed production [1,[10][11][12], acting as important biological defenders [13]. Although spiders are traditionally considered predators in ecosystems, these organisms also frequently feed on the products of extrafloral nectaries [14]. Biotic defences have a mutualistic character whereby resources such as plant EFNs are exchanged for spider services and are mediated by the interests, costs, and benefits for both groups [8]. The costs and benefits of this association can vary depending on various factors, such as the identity of the spider family [15], the season of EFN activity, and even the presence of competitors such as ants [1,2,16]. However, the relationship between spiders and plants can also be negative, as spiders consume or interfere with pollinators, leading to a reduction in plant fitness [17,18].
EFNs are nectar-producing structures not associated with pollination and found in various parts of plants, such as leaves, stems, and flower bud calyxes [19,20]. These nectaries produce a solution rich in water, sugars, amino acids, proteins, and lipids [21,22]. EFN-bearing plants are common in the Cerrado biome [23,24]. A study by Nahas et al. [14] found the presence of fructose from the EFNs of eight different plant species in 39 spider species from seven families. This indicates that feeding on EFNs is advantageous for spiders, as nectar is an excellent source of energy [1,[25][26][27].
Although spiders are among the most abundant and diverse arthropods in vegetation, studies on their interactions with plants are relatively scarce, and the literature on integrative studies on the relationship between spiders and extrafloral nectaries is still limited [28,29]. In this context, the aim of this study is to determine whether or not the presence of extrafloral nectaries on Heteropterys pteropetala A. Juss. (Malpighiaceae) is an attractive factor for spiders and whether these animals act as biological protectors. Our main hypothesis is that spiders are attracted to the extrafloral nectaries on this plant and, consequently, that their presence reduces leaf damage and increases the reproductive success of H. pteropetala (Table 1). Spiders can reduce damage to leaves and increase the reproductive success of a plant by consuming the herbivores present on it. In addition, we describe the phenology of the different reproductive phases of H. pteropetala and the abundance of spiders found over time.
Results
The reproductive period of H. pteropetala in our study began in December 2021 with the emergence of the first floral buds and ended in May 2022 with the collection of the samarids. The reproductive peak for the floral buds in both groups (EFNs active and EFNs inactive) was in February (Rayleigh test for buds with EFNs active: z = 0.94, p < 0.001; buds with EFNs inactive: z = 0.91, p < 0.001) (Table 2, Figure 1a,b). The inflorescences on the EFNs active plants reached their peak in February (Rayleigh test: z = 0.943, p < 0.01), while the EFNs inactive plants reached their inflorescence peak in March (Rayleigh test: z = 0.932, p < 0.01) (Table 1, Figure 1c,d). The peak of samarid production occurred in April for both manipulations (Rayleigh test for samarids with EFNs active: z = 0.957, p < 0.01; samarids with EFNs inactive: z = 0.952, p < 0.01) (Table 1, Figure 1e,f). Spider abundance was higher in January and February, with peak abundance observed in February in both manipulations (EFNs active: z = 0.436, p < 0.01; EFNs inactive: z = 0.366, p < 0.03) (Table 1, Figure 1g,h). No spiders were found on the EFNs active plants in August, and the lowest abundance of spiders on the EFNs inactive plants was also recorded in August.

Spider abundance was higher on the EFNs active plants than on the EFNs inactive plants (χ² = 9.0681; df = 1; p = 0.0026, Figure 2). A total of 157 spider specimens were counted, with 97 individuals found on the EFNs active plants and 60 on the EFNs inactive plants (Table 3). The EFNs active plants had a higher number of spiders from the families Thomisidae, Araneidae, and Salticidae than the EFNs inactive plants (Table 3). However, only the Cheiracanthiidae family had a higher number of representatives on the EFNs inactive plants than on the EFNs active plants (Table 3). The families Theridiidae and Oxyopidae showed no significant difference in the number of individuals between the treatment and control plants. A total of 470 insects were observed in association with H. pteropetala across both treatments, with the orders Lepidoptera (larval stage) and Hemiptera being the most abundant, especially on plants with EFNs inactive (Table A1).

The EFNs active plants showed less loss of leaf area (χ² = 35.646; df = 1; p = 0.0019) and less variation in herbivory within the manipulated groups (χ² = 11.609; df = 1; p = 0.0065) than the EFNs inactive plants, and this difference persisted over the months studied (χ² = 13.206; df = 1; p = 0.0001). There was also variation in herbivory between the different plant manipulations in December (2021) and in January, February, and April (2022) (Figure 3). In August and September, the plants were leafless, which resulted in an absence of herbivory; they only began to sprout again in September (Figure 3). The samarid/bud ratios, samarid/flower ratios, and seed weight did not differ significantly between the EFNs active and EFNs inactive plants (Table 4).
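The two test statistics reported above can be illustrated with a simplified sketch (not the authors' code): the Rayleigh test for a seasonal peak here uses the common large-sample approximation p ≈ exp(−z), and the chi-squared example is a one-degree-of-freedom goodness-of-fit comparison of two counts against a hypothetical 50:50 expectation.

```python
import math

def rayleigh(angles, weights=None):
    """Rayleigh test for circular uniformity.

    angles: observations in radians (e.g. months mapped onto a circle);
    weights: optional counts per angle. Returns (z, approximate p),
    using the large-sample approximation p ~ exp(-z).
    """
    if weights is None:
        weights = [1.0] * len(angles)
    n = sum(weights)
    c = sum(w * math.cos(a) for a, w in zip(angles, weights)) / n
    s = sum(w * math.sin(a) for a, w in zip(angles, weights)) / n
    r = math.hypot(c, s)          # mean resultant length, between 0 and 1
    z = n * r * r
    return z, math.exp(-z)

def chi2_two_counts(a, b):
    """Goodness-of-fit chi-squared (df = 1) for two counts against a
    50:50 expectation, e.g. spiders on treatment vs control plants."""
    e = (a + b) / 2.0
    return (a - e) ** 2 / e + (b - e) ** 2 / e

# 97 vs 60 spiders gives chi-squared ~ 8.72 under a 50:50 expectation;
# the paper reports 9.0681, so the authors' expected values or
# continuity corrections presumably differed.
chi2 = chi2_two_counts(97, 60)
```

In practice one would prefer library implementations (e.g. exact circular-statistics routines) over the exp(−z) approximation, which is only adequate for moderately large samples.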
Discussion
Our study suggests that spiders are attracted to plants that exude EFNs and that this attraction has a positive effect on reducing leaf area loss for H. pteropetala plants in the Brazilian Cerrado, confirming the first and second hypotheses. Thus, we have demonstrated that the presence of active nectaries acts as a source of attraction for spiders, which can act as efficient biological protectors. In other words, the nectar produced in EFNs can complement the diet of arthropod predators [26,27] and consequently attract these organisms, leading to a reduction in herbivory [30].
In particular, we found that active EFNs more efficiently attract spiders belonging to the Thomisidae, Araneidae, and Salticidae families. Similar results were obtained by Stefani et al. [31], who, after isolating shrubs of Palicourea rigida Kunth (Rubiaceae) from ants and measuring the recruitment of spiders visiting post-floral nectaries, found representatives of the Thomisidae and Salticidae families to be among those most abundant. This abundance of spiders may be associated with both the attractiveness of the nectar and the absence of ants. Supposedly, the great challenge faced by different species of spiders that use EFNs as a source of complementary food is to break through the defences promoted by ants [2,16]. The greatest abundance of spiders in our study was recorded during the reproductive period of H. pteropetala in both manipulations (Figure 1), although the greatest abundance was observed on the EFNs active plants (Figures 1 and 2; Table 2). This increase in abundance (in both manipulations) may have influenced the neutral protection rates during this period, refuting our third hypothesis. The presence of reproductive structures, such as buds, flowers, and fruits (forming samarids), can provide spiders with a wide variety of shelters, opportunities to find conspecifics, anchoring points for webs, and opportunities to use different foraging methods, even on plants with inactive EFNs [11,32].
Of all the spider families found, the Thomisidae family was the most abundant (Table 3), found on both the vegetative and reproductive parts of H. pteropetala. Thomisidae spiders are known as flower spiders, as they often camouflage themselves in flower petals or structures, waiting for pollinating prey to arrive [33]. Thus, spiders from the Thomisidae family are strongly associated with the reproductive period of their host plants. Studies have shown that, when individuals from this family are present in the reproductive parts of a plant, they can have positive, neutral, or negative effects on its reproduction. For example, Romero and Vasconcellos-Neto (2003) demonstrated positive effects on the reproduction of Trichogoniopsis adenantha (DC) (Asteraceae) in the presence of Thomisidae spiders, as the plants with these spiders produced more seeds than the plants without them [34]. Neutral effects were reported by Gavini et al. (2019), who studied interactions between the flowers of Anemone multifida (Ranunculaceae), their floral visitors, and Misumenops pallidus (Thomisidae); the authors observed that the presence of spiders did not reduce the number of floral visitors or the quantity and quality of the fruit and seeds formed [12]. Finally, there is ample evidence of negative effects of spiders on the reproduction of their host plants. For example, in another study, the presence of Thomisidae spiders in Leucanthemum vulgare (Vaill.) Lam. (Asteraceae) flowers reduced the number of floral visitors and the time pollinators spent in the flowers, generating a cascade effect which resulted in a 17% reduction in fruit and seed formation [35].
Spiders from the Salticidae and Araneidae families were also more abundant on the EFNs active plants in our study. According to Jackson et al. (2001), Salticidae spiders may have the habit of feeding on nectar, indicating that nectar feeding is possibly a common behaviour in this family [27]. Orb-weaving spiders, such as the Araneidae (Table 3), may also feed on nectar from EFNs (as well as dismantling and rebuilding their webs at regular intervals, allowing them to build their webs where resources are most abundant) [36]. For example, Nahas et al. [14] investigated the presence of fructose in the bodies of spiders that visit plants with EFNs in a neotropical savannah environment; in their study, the species Araneus venatrix (Araneidae), collected at night from Qualea grandiflora (Vochysiaceae) plants, showed the highest concentrations of fructose [14]. Thus, araneids build their webs to capture prey on plants with EFNs and supplement their diet with nectar. New herbivores can arrive at the plant by air, so webs built on the plant capture these herbivores before they even reach it, reducing the damage caused by herbivory.
Unlike the other spider families found on H. pteropetala, the Cheiracanthiidae family was more abundant on the EFNs inactive plants than on the EFNs active plants, despite the fact that this family is known for its nectar consumption [36]. As the representatives of the Cheiracanthiidae family were adults in the oviposition period, they were possibly found in greater numbers on the EFNs inactive plants because these locations allowed them to avoid the presence of competing spiders and probable predators of their eggs and young. In addition, spiders caring for egg sacs reduce their consumption of prey [37], thus weakening any positive interaction with the plant.
In summary, spiders are attracted to EFN nectar, confirming the existence of mutualism in the form of biotic protection between spiders and H. pteropetala. However, the positive relationship is limited to leaf structures, while in the reproductive parts the association found was neutral. Thus, the predatory activity of spiders on reproductive structures suggests a commensal role in which one species (the spider) benefits from the interaction while the other (the plant) is neither benefited nor harmed.
The Study Site and Species of Plant
This study was conducted from December 2021 to November 2022 at the Ecological Reserve of the Clube de Caça e Pesca Itororó de Uberlândia (18°59′ S, 48°18′ W, WGS84 datum, ~640 ha), in the state of Minas Gerais (MG), Brazil. The reserve's vegetation comprises various savanna physiognomies, with trees reaching up to 8 m in height [38]. The mean monthly rainfall ranges between 0 and 360 mm, and the mean monthly temperature is between 20.0 and 25.5 °C, with a dry season between May and September and a rainy season between October and April [3,39].
The plant species studied, Heteropterys pteropetala, is a shrub approximately 2 m tall, with two extrafloral nectaries (EFNs) at the base of each leaf (Figure 4a), at the base of the pedicel of the flower buds, and on the bracts of the inflorescences [40]. The inflorescences are terminal panicles with pink flowers (Figure 4b) and are zygomorphic, with five petals and five sepals; at the base of each sepal there are two elaiophores (oil glands), totalling between eight and ten glands per flower [41]. Each flower can produce up to three samarids (a fruit morphologically adapted for wind dispersal) (Figure 4c) [42]. H. pteropetala is dependent on cross-pollination for fruiting and is an important species for studies of ecological interactions due to the diversity of its guild of floral visitors. The presence of organisms that take part in pollen transport increases the fruiting and reproductive success of the species [42].
Experimental Design
To test hypotheses I, II, and III (see Table 1), we isolated the plants from ants. Ants are also attracted to EFNs, making them important competitors for spiders. According to a study carried out by Lange et al. [16] with nine different plant species with EFNs in a neotropical savannah area, a negative spatial/temporal effect on spider abundance was observed in the presence of ants. In addition, Stefani et al. [2] observed that spider species richness was significantly higher in the absence of ants, although the reverse was not true, possibly due to the different species compositions of the ants and spiders found and, consequently, the different types of interactions between them. Thus, the absence of ants in this study was necessary so that these organisms would not influence our results. Non-toxic resin (entomological glue, Tanglefoot®) was applied to the base of the trunk of all the plants to prevent ants from accessing them. All structures, such as grasses, that could serve as a bridge for the ants to access the plants were removed. We then carried out two different manipulations on the H. pteropetala plants in a natural environment: (I) EFNs active plants were individuals with active extrafloral nectaries (n = 20); and (II) EFNs inactive plants were individuals with inactive extrafloral nectaries (n = 20). The plants in the EFNs inactive group had all their nectaries enamelled, blocking them and preventing the release of nectar, in other words making them inactive. In the plants in the EFNs active group, enamel was also applied to the abaxial part of the leaf, next to the extrafloral nectary, allowing the normal release of nectar. Weekly inspections were carried out on all the plants to check the integrity of the nectary obstructions in the EFNs inactive plants, as well as the entomological resin at the base of the trunk in both manipulations, to prevent ant access. To describe the reproductive phenology of H. pteropetala, all flower buds, inflorescences, and samarids were quantified weekly during the plants' reproductive period.
To test hypothesis I, all the experimental plants were inspected weekly; the spiders found were photographed and quantified after a visual search of the entire bush. The branches were also shaken over a white tray so that any animals missed during the visual sweep would fall onto the tray for quantification. After the procedure, all the spiders were placed back on the plant.
To test hypothesis II, herbivory rates were measured monthly on five leaves of each plant in both manipulations. The five leaves of each plant were marked at the initial stage of expansion to monitor and record the loss of leaf area throughout the leaves' ontogeny, i.e., from budding to senescence. Herbivory was calculated from digital images analysed using ImageJ software version 1.53, as performed by Calixto et al. [43].
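The leaf-area-loss measurement described above amounts to a pixel-count ratio between two segmented images of the same marked leaf. The sketch below illustrates only that arithmetic; it is not the authors' ImageJ workflow, and the pixel counts are hypothetical.

```python
def percent_leaf_area_loss(initial_pixels: int, remaining_pixels: int) -> float:
    """Percent of the originally expanded leaf area that has been lost.

    Pixel counts come from segmented digital images of the same marked
    leaf photographed at the same scale (at marking vs. a later survey).
    """
    if initial_pixels <= 0:
        raise ValueError("initial leaf area must be positive")
    return 100.0 * (initial_pixels - remaining_pixels) / initial_pixels

# Toy example: a leaf segmented to 12,500 pixels at marking,
# 10,000 pixels remaining after herbivory -> 20% area loss.
print(percent_leaf_area_loss(12_500, 10_000))  # 20.0
```

Averaging this value over the five marked leaves per plant gives the per-plant herbivory rate compared between manipulations.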
Figure 1.
Figure 1. Number of reproductive structures produced by Heteropterys pteropetala and spider abundance between December 2021 and January 2022 for the EFNs active (blue, left) and EFNs inactive (yellow, right) groups: (a,b) floral buds; (c,d) inflorescences; (e,f) samarids; and (g,h) spider abundance. Arrow position represents the mean vector (µ), and arrow length represents the length of the mean vector (r).
Figure 3.
Figure 3. Foliar area loss (mean ± SE/standard error) in Heteropterys pteropetala from December 2021 to November 2022. The response variable was herbivory; plant type (EFNs active and EFNs inactive) was the predictor variable; months of the year and plant identification were the random variables. * = significant difference between the manipulations (Tukey's post hoc: p < 0.05).
Figure 4.
Figure 4. The studied plant, Heteropterys pteropetala, in the Cerrado sensu stricto at the Ecological Reserve of Clube Caça e Pesca Itororó in Uberlândia, Minas Gerais, Brazil. (a) Thomisidae spider on the abaxial surface of the leaf; the white arrow indicates the pair of EFNs at the base of the abaxial region of the leaf. (b) Inflorescences of a studied plant. (c) Samarids with the presence of a juvenile Thomisidae spider.
Table 1.
Overview of the hypotheses (H) and predictions tested in this study. EFNs inactive are the nectaries that were obstructed with enamel, while EFNs active are the nectaries without manipulation (see methodology).
Table 2.
Circular statistics applied to reproductive phenophases and spider abundance in Heteropterys pteropetala with EFNs active (n = 20) and EFNs inactive (n = 20) in a Cerrado area at the Ecological Reserve of Clube Caça e Pesca Itororó in Uberlândia, Minas Gerais, Brazil. The Rayleigh test was conducted with a significance level of 0.05.
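The Rayleigh test of Table 2 can be computed from event dates converted to angles on the circle. The sketch below is a generic implementation using Zar's standard approximation for the p-value, not the authors' exact software, and the input angles are illustrative.

```python
import math

def rayleigh_test(angles_deg):
    """Rayleigh test of circular uniformity.

    angles_deg: observations as angles, e.g. day-of-year * 360/365.
    Returns the mean vector length r and an approximate p-value
    (Zar's formula; adequate for moderate sample sizes).
    """
    n = len(angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    big_r = math.hypot(c, s)   # resultant length R
    r = big_r / n              # mean vector length (arrow length in Fig. 1)
    p = math.exp(math.sqrt(1 + 4 * n + 4 * (n * n - big_r * big_r)) - (1 + 2 * n))
    return r, min(p, 1.0)

# Observations clustered around the turn of the year (a seasonal peak):
r, p = rayleigh_test([300, 310, 320, 330, 340, 350, 0, 10, 20, 30])
# r is high (~0.88) and p is far below 0.05: the dates are not uniform.
```

A significant result (p < 0.05) indicates that the phenophase or spider counts are concentrated around a mean date rather than spread evenly through the year.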
Table 3.
Abundance of spider families found in different manipulations of the plant Heteropterys pteropetala.
Table 4.
Productivity of Heteropterys pteropetala in the presence and absence of spiders indicated by the ratios of samarids/buds, samarids/flowers, and seed weight. Values represent mean ± SE (standard error).
Parkinson's Disease and Homocysteine: A Community-Based Study in a Folate and Vitamin B12 Deficient Population
Background. Homocysteine (Hcy) levels have been reported to be higher in patients with Parkinson's disease (PD). This could be partially explained by levodopa treatment; whether untreated PD patients have higher Hcy levels is contradictory. Methods. A community-based study was conducted using a two-stage approach in subjects ≥ 55 years to find PD patients in 3 towns of Lüliang City. Blood samples were collected, and serum Hcy, folate, and vitamin B12 concentrations were measured. For each untreated PD patient, 5 controls matched for age and sex were selected to evaluate the relationship between Hcy levels and PD. Results. Of 6338 eligible residents, 72.7% participated in the study. 31 PD cases were identified; the crude prevalence of PD for people ≥ 55 years was 0.67%. Blood samples were collected from 1845 subjects, including 17 untreated PD patients. There was no difference in concentrations of serum Hcy, folate, and vitamin B12 between cases and controls (P > 0.05). In univariate and multivariate analysis, there was a significant inverse relation between PD and current smoking (P < 0.05); no other factor was statistically significant. Conclusions. The prevalence of PD was comparable to earlier studies in China. Neither hyperhomocysteinemia nor folate and vitamin B12 deficiency was a risk factor for PD.
Introduction
Parkinson's disease (PD) is the second most common neurodegenerative disease in the elderly after Alzheimer's disease [1]. The estimated number of individuals above the age of 50 with PD in the world was between 4.1 and 4.6 million in 2005 [2]. As the aging population grows, this number will double to between 8.7 and 9.3 million by 2030 [2]. However, the etiology of PD remains unknown after decades of research. Rates of PD are similar across races within communities sharing a common environment [3], yet prevalence ratios for blacks vary markedly between countries [4]. Therefore, environmental factors may be more important than genetic factors. Epidemiological studies also indicate that environmental factors, such as cigarette smoking, are involved in the development of PD [5].
Homocysteine (Hcy) is formed by demethylation of methionine. Hyperhomocysteinemia plays a role in the development of several diseases, such as cardiovascular and neurodegenerative diseases [6-8]. In vitro and in vivo studies indicate that Hcy contributes to the pathogenesis of the mesencephalic dopaminergic neuronal death occurring in PD [9]. However, available data from clinical studies are contradictory [9]. Hcy levels were 30% higher in PD patients compared to controls [10], which could be partially explained by regular levodopa treatment [11]. Investigating the Hcy levels of naive PD patients may give clues about the causal relationship between Hcy and PD.
The population living in Lüliang City, Shanxi Province, China, had inadequate folate and vitamin B12 intake [12], which may lead to a high prevalence of hyperhomocysteinemia [13]. Meanwhile, because of the low Development and Life Index [14], we presumed that most PD patients were undiagnosed and untreated. Thus, it was a suitable area in which to study the relationship between Hcy and PD. In addition, there had been no study of the prevalence of PD in rural North China; earlier studies were performed in large cities, such as Beijing and Shanghai, and areas around them. The aim of this study was twofold: (1) to find PD patients and estimate the prevalence of PD in Lüliang City; and (2) to examine whether untreated PD patients had higher Hcy levels and the association between hyperhomocysteinemia and PD risk using a case-control design.
Population Description.
The study was performed in Lüliang City, Shanxi Province, China. Lüliang, a mountainous region, is located in the midwest of Shanxi Province which is an underdeveloped area of China. There are approximately 3,720,000 inhabitants, 80% of whom live in rural areas. 148 rural towns belong to Lüliang City, which can be divided into three categories (North, Center, and South) by orientation. Three townships (Kangcheng, Gaojiagou, and Caijiaya) from all three categories were selected as the target populations because of good cooperation of local governments. There was no tap water supply in the 3 towns and people all drank well water there.
Eligibility Criteria.
We obtained a census list for 2012 from the household registry department of each town. The study was restricted to residents at age of 55 years or older (birth before 1958-01-01). Residents living in a Lüliang City residence less than 2 months per year were excluded from the study.
General Study Design.
The study was performed from August 2012 to December 2012. A free medical consultation in local village health clinics was conducted to improve participation. About 10 days before the survey, information on the opportunity to participate was published in local communities via leaflets and posters. Community workers contacted the residents via telephone calls or house visits one or two days before the evaluation to notify them of the time and place of examination. All respondents gave their written informed consent.
A two-stage community-based survey design was employed to detect PD patients. In the first stage, trained interviewers (doctors and senior clinical medical students) administered a screening questionnaire [15] for PD in local health clinics. Demographic and medical information was also collected, including smoking, tea drinking, and alcohol drinking habits. If targeted residents did not appear in the local health clinics, interviewers visited their house once to administer the questionnaire. Residents not at home during home visits were considered to have dropped out of the study. The questionnaire contains 9 questions; residents who responded positively to at least 2 of the 9 questions were selected for the second stage. Sensitivity of the questionnaire was measured in a sample of 47 patients affected by PD; specificity was investigated in 217 outpatients free of parkinsonism. The sensitivity was 97.9% and the specificity was 73.7%. To reduce the false negative rate, subjects who gave a positive answer to the question "Do your arms or legs shake?" also entered the second stage. We additionally asked residents whether they had ever taken levodopa or been diagnosed with PD before, and subjects answering yes to either question entered the second stage.
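The validation figures quoted above are simple two-by-two proportions. A minimal sketch follows, using the cell counts implied by the reported percentages (46 of 47 PD patients screening positive, 160 of 217 parkinsonism-free outpatients screening negative); the exact counts are reconstructed, not stated in the text.

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of diseased subjects who screen positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of disease-free subjects who screen negative."""
    return true_neg / (true_neg + false_pos)

# Counts implied by the reported validation percentages:
print(round(100 * sensitivity(46, 1), 1))    # 97.9
print(round(100 * specificity(160, 57), 1))  # 73.7
```

The high sensitivity is what matters for a stage-1 screen: few true PD cases are lost before the stage-2 neurological examination, at the cost of many false positives that stage 2 then filters out.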
In stage 2, a structured clinical workup comprising the Unified PD Rating Scale [16], a neurologic examination, and standardized history taking was used to establish the diagnosis and classification of parkinsonism. Neurologists from the First Hospital, Shanxi Medical University, performed the examinations in local health clinics or at home. Diagnosis of parkinsonism required the presence of bradykinesia and at least one of the following: muscular rigidity, rest tremor, or postural instability [17]. PD was diagnosed according to the United Kingdom Parkinson's Disease Society Brain Bank clinical diagnostic criteria [17]. The staging of PD was assigned according to the Hoehn and Yahr scale [18]. Non-PD parkinsonism, including vascular parkinsonism, drug-induced parkinsonism, multiple system atrophy, progressive supranuclear palsy, and other parkinsonism types, was diagnosed according to descriptions published previously [19].
Blood samples were collected from subjects who agreed. Approximately 10 mL of fasting venous blood was obtained by trained nurses in the field from subjects who appeared in the local village health clinics in the early morning after a 12 h fast. For PD patients who could not come to the field, fasting blood samples were collected at home. Serum was separated within 30 min of collection by centrifugation (4 °C, 20 min, 2000 rpm) in the field or at home and transported in a cooler to the First Hospital, Shanxi Medical University, Taiyuan, China, where the samples were stored at −70 °C until analyzed. Serum total homocysteine (Hcy), folate, and vitamin B12 were measured at the First Hospital, Shanxi Medical University, Taiyuan, China. Hcy was measured by an enzyme cycling method using a Beckman UniCel DxC 800 Synchron Clinical System Analyzer (Beckman Coulter, Inc.). Serum folate and vitamin B12 were measured simultaneously by a radioassay kit (MP Biomedical, Inc.).
Among subjects with blood samples collected, for each untreated PD patient, 5 controls matched for age (±2 years) and sex were selected at random.
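This matched-control selection can be sketched as a filter-then-sample step. The dictionary keys below ('age', 'sex') are hypothetical field names chosen for illustration, not the study's actual data format.

```python
import random

def select_matched_controls(case, pool, n_controls=5, age_window=2, seed=0):
    """Randomly draw n_controls from `pool` with the same sex as the
    case and an age within +/- age_window years of the case's age."""
    eligible = [s for s in pool
                if s["sex"] == case["sex"]
                and abs(s["age"] - case["age"]) <= age_window]
    if len(eligible) < n_controls:
        raise ValueError("not enough eligible controls in the pool")
    return random.Random(seed).sample(eligible, n_controls)

# Hypothetical data: one case and a pool of screened, non-PD subjects.
case = {"id": 1, "age": 68, "sex": "M"}
pool = [{"id": i, "age": 60 + i % 12, "sex": "M" if i % 2 else "F"}
        for i in range(2, 200)]
controls = select_matched_controls(case, pool)
# Every selected control is male and aged 66-70.
```

Matching on age and sex removes those two variables as confounders, at the cost of being unable to estimate their own effects on PD risk.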
Statistics.
For case-control study, current smokers were defined as subjects who smoked during investigation or quitted smoking for less than 3 months. Current alcohol drinkers were defined similarly. Serum Hcy, folate, and vitamin B12 were categorized into binary variable according to the medians.
Prevalence rates were calculated for PD and expressed as percentages, and 95% confidence intervals (CI) were obtained. In addition, prevalence rates standardized to the age composition of the 2010 China census were calculated. The linear-by-linear association chi-square test was used to test the significance of the linear relationship between prevalence and age. Differences between categorical variables were assessed using Pearson's chi-square test. Student's t-test or the Mann-Whitney test was used to compare continuous variables, depending on the type of distribution. Univariate logistic regression was used to estimate the odds ratios (ORs) and 95% CI for PD. Multivariate logistic regression was used to estimate adjusted ORs.
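For a single binary exposure, the univariate OR and its 95% CI reduce to the classic two-by-two computation with Woolf's log-based interval. The sketch below is a generic illustration of that formula, not the study's statistical package, and the cell counts are invented for the example.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    if min(a, b, c, d) == 0:
        raise ValueError("zero cell; consider a continuity correction")
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts (not the study's data): 5 of 17 cases exposed,
# 40 of 85 controls exposed.
or_, lo, hi = odds_ratio_ci(5, 12, 40, 45)
```

With such a small case group, the CI is wide and spans 1, which is exactly the pattern behind the non-significant exposures reported in Table 4.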
Results
There were 6338 eligible residents living in the three towns in Lüliang City. Of these, 4605 (72.7%) were screened in stage 1. Age and sex distribution for the target population, participants, and participation rates in this study is presented in Table 1. The crude prevalence of PD for people aged 55 years or older was estimated to be 0.67% (95% CI 0.46-0.96). The age-standardized prevalence for people aged 55 years or older was 0.67% after adjustment to the 2010 China census. The age- and sex-specific prevalence of PD is shown in Table 2. The prevalence of PD increased with advancing age (P = 0.00). There was no significant difference in PD prevalence between men and women (P = 0.08).
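The crude prevalence above is simply 31/4605. A Wilson score interval is one common way to attach a 95% CI; the paper does not state its exact method, so the bounds below differ slightly from the reported 0.46-0.96 (which is consistent with an exact binomial interval).

```python
import math

def prevalence_wilson_ci(cases, n, z=1.96):
    """Point prevalence with a Wilson score 95% CI."""
    p = cases / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, (centre - half) / denom, (centre + half) / denom

p, lo, hi = prevalence_wilson_ci(31, 4605)
print(f"{100*p:.2f}% (95% CI {100*lo:.2f}-{100*hi:.2f})")  # 0.67% (95% CI 0.47-0.95)
```

For rare outcomes like this (31 events in 4605 subjects), the Wilson and exact intervals agree to within about 0.01 percentage points.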
Blood samples were collected from 1845 subjects, including 17 untreated PD patients. Prevalence of hyperhomocysteinemia, low folate levels, and low vitamin B12 levels was high [20]. For each untreated PD patient, 5 controls matched for age (±2 years) and sex were selected. General characteristics of cases and controls are presented in Table 3. There was no difference in concentrations of serum Hcy, folate, and vitamin B12 between cases and controls (P > 0.05). In univariate analysis there was a significant inverse relation between PD and current smoking (Table 4). Current smokers were found to be less likely to have PD (OR = 0.212; 95% CI 0.067-0.674; P = 0.009). High homocysteine levels (P = 0.070), low folate levels (P = 0.427), low vitamin B12 levels (P = 0.427), current alcohol drinking (P = 0.999), tea drinking (P = 0.999), and pesticide exposure (P = 0.999) were not significantly related to Parkinson's disease in this population. In multivariate analysis, similar results were obtained (Table 4).
Discussion
In China, the most widely accepted prevalence study on PD was conducted in Beijing, Xian, and Shanghai, which were the largest cities of China [21]. However, regional development differences were obvious in China [14]. In addition, environmental factors were important in the pathogenesis of PD. The present population had different characteristics from those in large cities, such as high prevalence of hyperhomocysteinemia. Investigating the prevalence in this area was necessary.
This study employed a two-stage community-based design to estimate the prevalence rate of PD in a Chinese population, an approach widely used in epidemiological studies of PD [19,22,23]. The differences among those studies chiefly concern how PD is screened in stage 1, and we used a questionnaire with high sensitivity (97.9%) for this purpose. The questionnaire also showed high sensitivity (100%) in a previous community-based study [24], so the chance of missing PD patients in stage 1 of our study was low. A total of 31 cases were ascertained to have PD, which generated a crude prevalence of 0.67%, comparable to rates reported in [25], Ilan county, Taiwan (0.368%, ≥40 years) [19], and Beijing, Xian, and Shanghai, China (1%, ≥55 years) [21]. The prevalence of PD in the present population increased with age, in line with previous studies [19,25]. Hcy employs multiple neurotoxic mechanisms that have been associated with the pathogenesis of neurodegenerative disorders [9]. Among subjects with blood samples in the present study, most (72.1%) had hyperhomocysteinemia. Several studies of hyperhomocysteinemia have been conducted in China. Compared to the study of Wang et al. in East China (10.5 µmol/L, 45-75 years) [13], our study revealed a higher mean Hcy (24.6 µmol/L) [20]. Hao et al. also showed lower Hcy medians (8.8 µmol/L for South China and 11.4 µmol/L for North China) [26] than ours (19.3 µmol/L) [20]. However, the prevalence of PD in this area was not higher than the national figure (1%, ≥55 years) [21], suggesting that Hcy may not be involved in the pathogenesis of PD.
Several studies have found elevated plasma Hcy levels in PD patients treated with levodopa [27-29]. Because of the influence of levodopa, naive PD patients are appropriate cases for investigating the true relationship between Hcy and PD. Some studies have evaluated Hcy levels in untreated PD patients before, with contradictory results [30,31]; small sample sizes (range 15-30) might account for the divergence. A meta-analysis including 6 studies found no significant difference in plasma Hcy levels between untreated patients and healthy controls [32]. In the present study, most PD cases (96.8%) were newly diagnosed and had not received treatment, which can be partially explained by the low level of development of the local area. There was no difference in Hcy, folate, and vitamin B12 levels between untreated PD cases and controls in this population. Our results accord with the meta-analysis. In univariate and multivariate analysis, hyperhomocysteinemia was not a risk factor for PD, nor were folate and vitamin B12 deficiency. Current smoking was found to be a protective factor for PD, which is widely accepted [33,34]. These results indicate that Hcy does not play an important role in the pathogenesis of PD. In cohort studies, intake of folate or vitamin B12 was also not related to the risk of Parkinson's disease [35,36].
Our study had limitations. There were only 17 untreated PD cases in the case-control study, which could lead to bias. Patients from an epidemiological study share a common background and are more appropriate for a case-control study than cases from hospitals; however, the epidemiological study was costly, making it hard to expand the sample size. Nevertheless, our results were generally consistent with previous studies [32,35,36] and credible.
Conclusions
The prevalence of PD was comparable to earlier studies conducted in Chinese populations and increased with age. There was no difference in Hcy, folate, and vitamin B12 levels between untreated PD cases and controls. Neither hyperhomocysteinemia nor folate and vitamin B12 deficiency was a risk factor for PD, and Hcy did not appear to play an important role in the pathogenesis of PD.
Towards the sustainable elimination of gambiense human African trypanosomiasis in Côte d’Ivoire using an integrated approach
Background Human African trypanosomiasis is a parasitic disease caused by trypanosomes, among which Trypanosoma brucei gambiense is responsible for a chronic form (gHAT) in West and Central Africa. Its elimination as a public health problem (EPHP) was targeted for 2020. Côte d'Ivoire was one of the first countries to be validated by WHO in 2020, which was particularly challenging as the country still reported around a hundred cases a year in the early 2000s. This article describes the strategies implemented, including a mathematical model used to evaluate the reporting results and infer progress towards sustainable elimination. Methods The control methods combined exhaustive and targeted medical screening strategies, including the follow-up of seropositive subjects (considered as potential asymptomatic carriers) to diagnose and treat cases, as well as vector control to reduce the risk of transmission in the most at-risk areas. A mechanistic model was used to estimate the number of underlying infections and the probability that elimination of transmission (EoT) had been met between 2000 and 2021 in two endemic and two hypo-endemic health districts. Results Between 2015 and 2019, nine gHAT cases were detected in the two endemic health districts of Bouaflé and Sinfra, in which the number of cases/10,000 inhabitants was far below 1, a necessary condition for validating EPHP. Modelling estimated a slow but steady decline in transmission across the health districts, bolstered in the two endemic health districts by the introduction of vector control. The decrease in underlying transmission in all health districts corresponds to a high probability that EoT has already occurred in Côte d'Ivoire. Conclusion This success was achieved through a multi-stakeholder and multidisciplinary One Health approach in which research played a major role in adapting tools and strategies to this large epidemiological transition to a very low prevalence.
This integrated approach will need to continue in order to reach the verification of EoT in Côte d'Ivoire, targeted for 2025.
Introduction
Human African trypanosomiasis (HAT) is a parasitic disease caused by trypanosomes that are transmitted to humans by tsetse [1]. Trypanosoma brucei gambiense (T. b. gambiense) is responsible for a chronic form of HAT (gambiense HAT, gHAT) in West and Central Africa accounting for 97% of all reported HAT cases during 2001-2020 [2]. In comparison, T. b. rhodesiense is zoonotic and responsible for an acute form of HAT (rhodesiense HAT, rHAT) in East and South Africa. Both forms can be deadly if left untreated [3]. From the 1970s to the 1990s, HAT experienced a phase of emergence/re-emergence that resulted in a significant increase in the number of cases, peaking at 37,385 reported cases recorded in 1998 [4]. The response to increasing cases, which was organised around national programmes of endemic countries dedicated to HAT control supported by the World Health Organization (WHO) and partners, was effective, and the number of cases reported annually fell below 10,000 in 2009 [5]. gHAT was then included in the WHO roadmap for neglected tropical diseases (NTDs) in 2012, with the goal of elimination as a public health problem (EPHP) by 2020 [6] and subsequently elimination of transmission (EoT) to humans by 2030 [2].
Two main global indicators were subsequently defined to monitor the EPHP process: 1) reducing the annual number of cases to fewer than 2,000 per year by 2020; and 2) a 90% reduction in the area at moderate, high or very high risk (the latter defined as an area that reports in excess of one case/10,000 people/year, averaged over a five-year period). The 90% reduction refers to the period 2016-2020 compared to the 2000-2004 reference period. With fewer than 1,000 cases reported each year since 2018 (565 gHAT and 98 rHAT cases were reported in 2020) the first global indicator was met, however a reduction of 120,000 km 2 (83%) in the moderate and high-risk area meant the global 2020 EPHP target was slightly missed but was thought to be achievable by 2022 [2]. Country-specific validation of EPHP by the WHO is conducted through the compilation of a dossier of data to demonstrate that the indicator for national EPHP has been achieved. This indicator is defined as an average of less than one case per 10,000 people per year over a five-year period in each health district. Togo and Côte d'Ivoire were the first countries to be validated by WHO as having achieved EPHP of gHAT [2].
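The district-level EPHP indicator described above is a five-year average rate. A minimal sketch of the computation follows; the case counts and the district population used in the example are illustrative only, not figures from the dossier.

```python
def meets_ephp_indicator(annual_cases, population):
    """WHO district-level EPHP indicator for gHAT: an average of fewer
    than one reported case per 10,000 people per year over five years.

    annual_cases: case counts for five consecutive years.
    population: district population (assumed constant here).
    """
    if len(annual_cases) != 5:
        raise ValueError("indicator is defined over a five-year period")
    mean_rate = sum(annual_cases) / 5 / population * 10_000
    return mean_rate < 1.0

# Illustrative: nine cases over five years in a district of 250,000
# people gives a mean rate of 0.072/10,000/year, well below threshold.
print(meets_ephp_indicator([3, 2, 2, 1, 1], 250_000))  # True
```

Because the threshold is an average over five years, a single bad year does not by itself break the indicator; conversely, validation requires the condition to hold in every health district, not just nationally.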
Achieving this goal was challenging in Côte d'Ivoire as the country reported around one hundred gHAT cases a year in the early 2000s [7], the majority of which were in endemic foci in Western-Central forest areas [8]. Since 2009 fewer than 10 gHAT cases were reported annually with the reduction in transmission and detected case numbers driven by active case detection by mobile teams that screen at-risk populations [7,9]. This large epidemiological transition to very low incidence necessitates the adaptation and evolution of control strategies since there are diminishing returns for the same level of effort. This experience is shared by other elimination or eradication programmes such as polio [10] or Guinea worm [11] where previous rapid reductions in case reporting have now been replaced by extremely low but persistent case detections and new intervention approaches have been required. To overcome this end-game challenge, gHAT strategies in Côte d'Ivoire have now shifted to utilise innovative tools both within the classical "screen, diagnose and treat" algorithm and by use of complementary vector control [12]. This article describes the approach led by the Programme National d'Elimination de la Trypanosomiase Humaine Africaine (PNETHA) by focusing on the case reporting results obtained during the period 2015-2019 that enabled the validation of EPHP in Côte d'Ivoire. We also used a mathematical modelling approach to quantitatively evaluate these reporting results and infer progress towards the achievement of EoT of gHAT, demonstrating the importance of an integrated, data-driven approach for sustainable elimination.
Materials and methods
In this section we describe details of the epidemiological context of gHAT in Côte d'Ivoire, the recent activities in intervention and screening by PNETHA, and outline how we have utilised mathematical modelling to retrospectively analyse case reporting results. The paper focuses on the 2015-2019 period on which the EPHP was based, but also takes into account the number of cases reported by PNETHA historically (2000-2014) and after the EPHP dossier was submitted (2020-2021) to further demonstrate the impact of the programme through modelling.
Ethics statement
The study protocol was approved by the national ethics committee (CNER) of the Ministry of Health and Public Hygiene in Côte d'Ivoire (reference 030-18/MSHP/CNER-kp). Prior to inclusion, each potential study participant was informed about the objectives, conduct, benefits and risks of the study in the language of their choice in order to obtain verbal informed consent.
Study area
gHAT is characterised in Côte d'Ivoire, as it is throughout Africa, as a focal disease with hotspots of infection [13,14]. Although focus boundaries are difficult to define precisely, the foci were considered to be epidemiological units until 2014 (S1 Fig). Subsequently, health districts (HD) have been used as the epidemiological units of analysis, as required for reporting to WHO for the EPHP. The number of reported cases per HD in Côte d'Ivoire between 2000 and 2014 is given in S1 Table. In 2015, HDs in Côte d'Ivoire could be categorised into four distinct groups (Fig 1). Like most other neglected tropical diseases (NTDs), gHAT is not vaccine preventable, so control and elimination activities have been focused on two key parts of the transmission cycle: 1. Treatment of infected individuals, which acts both to prevent disease mortality and to reduce the time people spend infected and infectious to tsetse vectors.
2. Targeting the tsetse vector with the goal of reducing the number of transmission events.
Unlike several other NTDs, the treatment pathway for gHAT currently requires confirmation of infection prior to treatment, so mass drug administration or "screen-and-treat" strategies cannot currently be used. The range of medical and vector interventions routinely recommended and deployed to tackle gHAT are presented elsewhere [3]. However, the gHAT control strategy is variable from region to region and over time and so here we outline the different screening, diagnostic and vector control approaches that have formed part of the gHAT elimination strategy in Côte d'Ivoire during the 2015-2019 period.
Active screening
Active screening can be exhaustive for the entire population of a given area, or targeted at a population particularly at risk. In all cases, the first step for active screening carried out by the mobile teams involved informing the administrative and customary authorities and raising awareness among the populations targeted for screening; exhaustive active screening typically aimed to recruit as many people as possible in the villages identified for screening. Diagnosis during exhaustive active screening followed the decision algorithm shown in Fig 2A. The card agglutination test for trypanosomiasis (CATT) [15] on whole blood collected by finger puncture (CATTb) was carried out in all the people who presented themselves to the mobile team during the screening activity. In the case of a positive result with CATTb, a venous sample (5 ml of heparinized blood) was taken from the bend of the elbow to perform the CATT on plasma dilutions (CATTp). "Seropositives" were defined as individuals with a positive CATTp at a dilution of at least 1/4 (CATTp ≥ 1/4); they underwent parasitological examination: the mini anion exchange centrifugation test with buffy coat (mAECT BC, [16]) and microscopic examination between slide and coverslip (×400) of lymph node aspirate (LNA) in cases where cervical lymphadenopathy was present. A seropositive who was positive on at least one parasitological examination was confirmed as a gHAT case, denoted by T.
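The decision path just described can be summarised as a short function. This is an illustrative translation of the Fig 2A algorithm as described in the text; the function and parameter names are ours, and the boolean inputs stand in for field test results:

```python
def classify_exhaustive_screening(cattb_pos, cattp_quarter_pos, parasitology_pos):
    """Sketch of the Fig 2A pathway for exhaustive active screening.

    cattb_pos:         CATT on whole blood positive
    cattp_quarter_pos: CATT on plasma positive at dilution >= 1/4
    parasitology_pos:  mAECT BC and/or lymph node aspirate positive
    """
    if not cattb_pos:
        return "negative"
    if not cattp_quarter_pos:
        return "negative"
    if parasitology_pos:
        return "confirmed gHAT case (T)"
    # Parasitology-negative seropositives go on to immune trypanolysis (TL)
    return "seropositive: refer for TL testing"
```

The cascade structure mirrors the field workflow: each cheaper, less invasive test gates the next, so the venous sample and parasitology are only performed on progressively smaller subsets of those screened.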
Seropositives negative in parasitology were tested with the immune trypanolysis test (TL) [17], performed at Centre International de Recherche-Développement sur l'Elevage en zone Subhumide (CIRDES, Bobo-Dioulasso, Burkina-Faso). TL-positive subjects were considered to be "TL-seropositive" (denoted by S) and potential carriers of trypanosomes. A targeted active screening strategy consisted of a follow-up of these TL-seropositive subjects once a year according to the algorithm in Fig 2A until the serological tests were negative or until parasitological confirmation and treatment.
To be more effective in a context of low prevalence, other targeted active screening strategies were put in place. These included door-to-door [8] and spatial follow-up [18] by which it was possible to screen the family in a more friendly fashion as well as the most-at-risk populations that share the same daily spaces as the gHAT cases and TL-seropositive. The diagnostic algorithm of this targeted active screening was the same as the exhaustive active screening (Fig 2A).
Targeted active screening was also performed at villages previously identified as being at greatest risk in a given area based on historical, epidemiological and geographical data, as defined by the so-called "identification of villages at risk" (IVR) strategy [19]. This strategy was mainly applied in the historical HDs. During IVR activities, clinical and epidemiological suspects were tested with a simplified algorithm (Fig 2B). Serology was based on CATTb or a rapid diagnostic test (RDT SD1, Abbott Diagnostics, South Korea) [20], and parasitology, performed on CATTb- or RDT-positive subjects, was based only on LNA if enlarged lymph nodes were present. For CATTb- or RDT-positive subjects, a sample of blood dried on filter paper was collected to perform TL. In the case of LNA-positive subjects (confirmed gHAT cases), or when the TL was positive in at least one LNA-negative subject, the village concerned was automatically selected for exhaustive active screening. The LNA-negative and TL-positive subjects were tested using mAECT BC. The mAECT-positive subjects were confirmed gHAT cases and the negative ones were considered TL-seropositive subjects targeted for follow-up as described above.
[Fig 2 caption] gHAT diagnosis algorithms for active screening, including the targeted active screening conducted during the "identification of villages at risk" strategy. CATTb = card agglutination test for trypanosomiasis performed on whole blood; CATTp = card agglutination test for trypanosomiasis performed on plasma dilution; RDT = rapid diagnostic test; mAECT BC = miniature anion-exchange centrifugation technique performed using buffy coat; LNA = lymph node aspirate; SC-CSF = simple centrifugation of cerebrospinal fluid; WBC = CSF white blood cell/μl; - = negative; + = positive. The "identification of villages at risk" (IVR) strategy was previously described [19]. https://doi.org/10.1371/journal.pntd.0011514.g002
Stage diagnosis was performed for all confirmed gHAT cases and was based on the technique of simple centrifugation of cerebrospinal fluid (SC-CSF) to allow visualisation of trypanosomes [21] and on the white blood cell (WBC) count in CSF (as WBC/μl). Cases who were negative for trypanosomes in CSF and with ≤ 5 WBC/μl were classified as having stage 1 disease, typically with mild or no symptoms. Cases who had trypanosomes visualised in CSF or > 5 WBC/μl were considered to be in stage 2, typically with neurological disorders. People with stage 1 disease were treated with pentamidine and those in stage 2 with nifurtimox-eflornithine combination therapy (NECT) [22]. Post-treatment follow-up including lumbar puncture was performed only in the case of clinical suspicion of relapse, as recommended by WHO [3].
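The staging and treatment rule above reduces to a single threshold test; a minimal sketch (function names are ours, for illustration only):

```python
def disease_stage(trypanosomes_in_csf, wbc_per_ul):
    """Stage 2 if trypanosomes are seen in CSF or the WBC count exceeds
    5/microlitre; otherwise stage 1."""
    return 2 if trypanosomes_in_csf or wbc_per_ul > 5 else 1

def first_line_treatment(stage):
    """Stage 1: pentamidine; stage 2: nifurtimox-eflornithine combination
    therapy (NECT)."""
    return "pentamidine" if stage == 1 else "NECT"
```

Note that a case with exactly 5 WBC/μl and no visible trypanosomes remains stage 1, since the stage 2 criterion is strictly greater than 5.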
Passive screening
The national health system in Côte d'Ivoire is based on a three-level pyramid, with health centres that serve as an entry point at the first and peripheral level, general and regional hospitals at the secondary level and university hospitals and specialized institutes at the tertiary level. These public health structures are complemented by private clinics and hospitals as well as a network of traditional medicine. According to the Ministry of Health in 2018, indicators of human resource availability were nationally at 1.4 physicians per 10,000 population and 2.3 nurses per 5,000 population. The national average rate of service use was 47.5% with wide disparities across health regions.
Passive screening refers to diagnosis made in fixed health facilities and was based on clinical suspicion, with the following symptoms considered suggestive of gHAT disease: 1) long-term fever and no effect of antimalarial treatment, 2) headache for a long period (> 14 days), 3) presence of enlarged lymph nodes in the neck, 4) severe weight loss, 5) weakness, 6) severe pruritus, 7) amenorrhea, abortion(s), or sterility, 8) psychiatric problems (aggressiveness, apathy, mental confusion, unusual increasing hilarity), 9) sleep disturbances (nocturnal insomnia and excessive daytime sleep), 10) motor disorders (abnormal movements, tremor, difficulty walking), 11) speech disorders, 12) convulsion, 13) coma [23]. All subjects in whom at least one of these symptoms was observed were considered clinical suspects. The algorithm used for active screening and described in Fig 2A was applied at the Projet de Recherches Cliniques sur la Trypanosomiase (PRCT, Daloa) reference center for the diagnosis and treatment of gHAT and the only sentinel site for passive screening until 2017.
In August 2017, passive screening was set up in 10 health centres of the endemic HDs of Bouaflé (Bonon focus) and Sinfra as part of the DiTECT-HAT research project [23]. The diagnostic algorithm used is presented in Fig 3A. Clinical suspects were tested with three RDTs (SD1, HAT Sero-K-Set (Coris BioConcept, Belgium), and rHAT Sero-Strip (Coris BioConcept, Belgium)). Subjects who tested positive with at least one RDT (seropositive) were tested by parasitology (mAECT BC and LNA). Staging was performed on parasitologically confirmed cases. TL on blood dried on filter paper [24] was then carried out on RDT-positive but parasitology-negative subjects. TL-positive subjects were considered TL-seropositive and were targeted for active follow-up as described above.
In May 2018, passive screening was also set up in 13 health centres in the hypo-endemic HDs, selected based on epidemiological data (the geographical distribution of the last cases detected) and their catchment areas, and in five neurology or psychiatry services in Bingerville (1), Bouaké (2) and Abidjan (2) (Fig 4) to ensure national coverage. The diagnostic algorithm used was based on the RDT SD1 (Fig 3B). The 18 sites were supervised every three months and capacity building for doctors, nurses and laboratory technicians was carried out every year to optimise the effectiveness of the gHAT monitoring programme.
Training to strengthen the capacities of health workers, including in gHAT clinical suspicion and diagnosis, preceded the implementation of passive screening at these 28 gHAT sentinel sites.
Vector control
Vector control (VC) provides a complementary method to screening and treatment and has been used to reduce tsetse populations and interrupt gHAT transmission in a variety of geographies (e.g. Chad, Democratic Republic of Congo, Guinea, and Uganda) [25,26,27]. In Côte d'Ivoire, VC, mostly using Tiny Targets [28,29] but also Vavoua traps [30], both impregnated with deltamethrin, began in January 2016 in the HD of Bouaflé (Bonon focus). The first three years (until December 2018) of the Bonon intervention have been described elsewhere [31,32]; here we summarise this work and provide an update on the results obtained until December 2019. The first deployment in Bonon took place in February 2016 with 1,890 Tiny Targets. During annual redeployments in February 2017, February 2018 and February 2019, additional Tiny Targets were added to reach a total of 2,016 deployed in February 2019 (Fig 5A). During the 2019 redeployment, 57 Vavoua traps were also set to reinforce VC in areas where tsetse were still being caught during periodic entomological assessments. A more targeted VC campaign began in May 2017 in the HD of Sinfra; the intervention aimed to control tsetse densities by deploying Tiny Targets supplemented by Vavoua traps at human/tsetse contact points in areas where the risk of transmission was believed to be highest. A total of 736 Tiny Targets were deployed in May 2017 and redeployed in July 2018 and July 2019. In July 2018, 115 additional Tiny Targets and 44 Vavoua traps were also deployed to reinforce VC in areas where tsetse were still being caught during periodic entomological assessments. In July 2019, the 44 Vavoua traps were replaced by Tiny Targets and 12 additional Vavoua traps were set. At the end of 2019, 895 Tiny Targets and 12 Vavoua traps were deployed in the Sinfra HD (Fig 5B).
A T0 entomological survey using unimpregnated Vavoua traps for capture was conducted in the two foci before the first deployment of Tiny Targets. This T0 survey made it possible, in addition to sensitising local communities, (i) to delineate the intervention areas, (ii) to characterise the tsetse populations (species, densities, distribution) and (iii) to determine the distribution of control devices to be deployed. All traps were set for 48 or 96 hours and georeferenced using a GPS. Fly collection was made for two or four consecutive days and captured tsetse were identified by species with a magnifying glass and an identification key [33]. Apparent densities of flies per trap per day (ADTs) were determined.
To monitor the results of the VC campaign, the locations of sentinel traps (unimpregnated Vavoua traps for capture) were selected from those set during the T0 survey, on the basis of the ADT and to ensure homogeneous spatial coverage of the study area. These sentinel traps represented 10% of the total traps in T0. Quarterly entomological assessments used 30 sentinel traps in Bonon and 35 sentinel traps in Sinfra. All traps were set for 48 hours. Fly collection was made for two consecutive days and captured tsetse were identified by species [33]. ADTs were compared with those of T0.
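The ADT and the reduction relative to T0 follow directly from the catch data; a minimal sketch with illustrative numbers (these figures are examples, not survey results):

```python
def apparent_density(total_flies_caught, n_traps, n_days):
    """Apparent density of tsetse per trap per day (ADT)."""
    return total_flies_caught / (n_traps * n_days)

def percent_reduction(adt_t0, adt_followup):
    """Reduction of a follow-up ADT relative to the T0 baseline, in percent."""
    return 100.0 * (1.0 - adt_followup / adt_t0)

# Illustrative only: 30 sentinel traps set for 2 days at T0 and at a
# quarterly follow-up assessment
baseline = apparent_density(600, 30, 2)   # 10 flies/trap/day
followup = apparent_density(24, 30, 2)    # 0.4 flies/trap/day
reduction = percent_reduction(baseline, followup)
```

Because the same sentinel trap locations and trapping duration are reused at each quarterly assessment, the ratio of ADTs is directly comparable over time.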
Modelling
For the modelling approach, we report on data for the period 2000-2021. S1 Data provides detailed active and passive case data partitioned by stage for the locations where there are sufficient data to conduct model fitting (Bonon, Bouaflé subprefecture, Daloa, Oumé and Sinfra); only HDs with 10 or more data points (one data point is counted as a year with at least one passive case detected, or one active screening) were included as fewer data points were not deemed sufficient to provide a robust model fit. Other HDs are not included in that file and are omitted from model fitting, but aggregate case data (totals per year per location) are included in S1 Table. Following implementation of the different interventions and data collection we used mathematical modelling to assess these data and provide quantitative analysis of progress in Côte d'Ivoire. The modelling was used to estimate the underlying number of new infections each year between 2000 and 2021 and therefore assess the reduction in transmission over this time period. To achieve this, a previously developed mechanistic model (the "Warwick gHAT model"), originally published in [34], and more recently updated for the DRC [35] and Chad [36], was adapted for the context in Côte d'Ivoire and fitted to the annual time series data collected by PNETHA. The model has recently been described in detail elsewhere [35], and model equations and parameters are given in the S1 Text. Briefly, this model captures the natural history of infection in humans from exposure to the parasites and the relatively long progression through stage 1 and, if not treated, stage 2 infection. We simulate detection and treatment of cases via both active and passive screening, with the number of active cases identified within a year linked to the number of people tested by mobile teams. 
Tsetse are explicitly included in the model to capture human-tsetse-human transmission cycles and, furthermore, it is assumed that there is differential risk in exposure to tsetse bites between different people. As described in the vector control section, from 2016 and 2017, vector control was implemented in Bonon (in Bouaflé HD) and Sinfra, respectively, and the corresponding reduction in tsetse populations is included in the transmission model for these foci. Tsetse trap data were used to inform the parameters associated with observed tsetse population reduction by fitting the tsetse population dynamic sub-model to these data via maximum likelihood estimation (see S1 Text).
To fit the full deterministic epidemiological model to data we use a Markov chain Monte Carlo (MCMC) methodology (see S1 Text), which compares annual active and passive case reporting simulated by the model to those observed in the data for each year in each HD. We carried out fitting for the two endemic HDs (Bouaflé and Sinfra) and two hypo-endemic HDs (Daloa and Oumé). Other listed hypo-endemic HDs and the historical and non-endemic HDs were not included in model fitting due to insufficient data points. As Bouaflé HD comprises two epidemiological foci (Bouaflé subprefecture and Bonon) and only one of these had the vector control intervention deployed, for the Bouaflé HD fitting we considered these smaller geographical units independently before aggregating to HD level.
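As a concrete illustration of how simulated and observed annual case counts can be compared within such an MCMC, a Poisson observation model is a common choice for count data of this kind. This is a generic sketch under that assumption; the exact likelihood used by the Warwick gHAT model is specified in S1 Text:

```python
import math

def poisson_log_likelihood(observed_cases, model_expected_cases):
    """Poisson log-likelihood of observed annual case counts given model
    expectations, summed over years. In a fitting framework, one such term
    would be evaluated per data stream (active, passive) and per HD."""
    ll = 0.0
    for obs, exp in zip(observed_cases, model_expected_cases):
        # log P(obs | Poisson(exp)) = obs*log(exp) - exp - log(obs!)
        ll += obs * math.log(exp) - exp - math.lgamma(obs + 1)
    return ll
```

Within an MCMC, proposed parameter sets that make the model's expected case counts closer to the observed series receive higher log-likelihood and are accepted more often, yielding posterior distributions for the fitted parameters.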
The fitted model parameters are found in Table D in S1 Text and include the basic reproduction number (R 0 ), the relative risk of high-risk people being bitten by tsetse compared to low-risk people (r), the proportion of the population who are low risk (k 1 ), the proportion of stage 2 cases that go on to be reported (as opposed to those that die undetected) (u), and passive detection rates (η H , γ H ). In some years the number of people tested in active screening was unknown, consequently this value was inferred during fitting. All these parameters were fitted independently in each region as they are assumed to be geographically variable. Prior parameter distributions for these fitted parameters and fixed parameter values can be found in the Tables D and C in S1 Text, respectively.
In addition to inferring model parameters, the fitting process enabled us to estimate the expected number of annual new infections in each HD during the data period (2000-2021) and therefore to quantify the reduction in transmission over time. It also allowed us to calculate the probability that each location had achieved local EoT by a given year.
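The probability of EoT by a given year is estimated as the fraction of stochastic realisations in which transmission has ceased by then. A toy sketch of that tallying step follows; the simulator argument is a stand-in for one run of the actual stochastic model under a posterior parameter draw:

```python
import random

def prob_eot_by(year, draw_eot_year, n_runs=10_000, seed=1):
    """Fraction of stochastic realisations whose elimination-of-transmission
    year is <= `year`. `draw_eot_year(rng)` stands in for one realisation of
    the stochastic model under a posterior parameter draw."""
    rng = random.Random(seed)
    hits = sum(draw_eot_year(rng) <= year for _ in range(n_runs))
    return hits / n_runs

# Toy stand-in simulator: EoT year uniform on 2012..2021 (illustrative only)
def toy_simulator(rng):
    return rng.randint(2012, 2021)
```

Using the stochastic model for this step avoids relying on an arbitrary prevalence threshold applied to deterministic outputs: extinction is a chance event, and each realisation either reaches it by the given year or does not.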
Results

gHAT situation during the 2000-2014 period
As the results of this article focus on the 2015-2019 period, it is apt to first describe the epidemiological situation observed beforehand, based on the number of cases reported between 2000 and 2014 by PNETHA (S1 Table). A total of 650 gHAT cases were reported, most of them from the Bonon subprefecture (323) and Sinfra HD (176), where the last two epidemics were recorded: the early 2000s for Bonon [37,38] and the mid-1990s for Sinfra [39,40]. During this period, 151 cases were recorded in the hypo-endemic HDs (from one case in the Gagnoa, Issia and Zuénoula HDs to 50 cases in the Daloa HD). In all HDs, the number of cases gradually decreased over this period.
Active screening during 2015-2019
The results of exhaustive active screening activities carried out between 2015 and 2019 in the endemic HDs of Bouaflé and Sinfra and the hypo-endemic HD of Aboisso are presented in Table 1. A total of 13,074 people were tested; a single confirmed case of gHAT was detected in the HD of Bouaflé in 2015, and four TL-seropositives were identified (three in Bouaflé and one in Sinfra). The results of the exhaustive active screening activities carried out between 2017 and 2019 in the historical HDs are presented in S2 Table. A total of 28,796 people were tested and no gHAT cases or TL-seropositives were identified. Table 2 presents the results of targeted active screening conducted on populations sharing the same spaces as the last gHAT cases and TL-seropositives identified, mainly in the HDs of Sinfra and Bouaflé, drawing in particular on the results of spatial follow-ups. A total of 3,105 people were tested between 2017 and 2019, but no cases or TL-seropositive individuals were identified. The results of targeted active screening activities using the IVR method by HD between 2017 and 2019 are presented in S3 Table. A total of 5,093 clinical and epidemiological suspects were tested, but no cases or TL-seropositives were identified.
Follow-up results of TL-seropositive subjects are shown in Table 3. A total of 97 subjects were followed and tested. One case was detected in 2017 in the HD of Bouaflé and one in 2019 in Sinfra. In 2019, 18 subjects were still serologically positive and four of them were positive for both CATTp and TL. The case detected in Sinfra had been monitored for more than 20 years while living in Abidjan for 15 years. The case of Bouaflé was first identified as a TL-seropositive in 2014 and he stayed in his village.
Passive screening during 2015-2019
Between 2015 and 2019 a total of 169 people were tested and two cases of gHAT detected at the PRCT of Daloa, the reference centre for the diagnosis and treatment of gHAT recognized as such for decades by the populations of the gHAT foci of the West Central Côte d'Ivoire [9]. Both of the cases were detected in 2015 from the 33 people tested that year. One of the two cases was from the HD of Sinfra and the other from the HD of Bouaflé. No further cases were detected from the 136 people tested at PRCT Daloa between 2016 and 2019. Table 4 presents the results of the passive screening implemented between 2017 and 2019 in the endemic HDs of Sinfra and Bouaflé. A total of 3,433 clinical suspects were tested and two cases were reported in 2017, one in Sinfra and one in Bouaflé. They were diagnosed as stage 2 infections (the case in Sinfra was very advanced) as already described [23]. A third person, positive with the three RDTs and TL but negative in parasitology identified in Bouaflé HD in 2018, died following a sudden neurological deterioration without it being possible to confirm the gHAT diagnosis using further parasitological investigations. Given the strong clinical and serological suspicion then confirmed by other serological and molecular tests, this case was considered a serological gHAT case, i.e. a confirmed case, in the PNETHA registers. No cases or TL-seropositives were identified in 2019.
The results of the passive screening implemented in 2018 and 2019 as part of sentinel site surveillance are shown in S4 Table. A total of 605 clinical suspects were tested, including 84 in national coverage facilities and 521 in hypo-endemic HDs. While five individuals were RDT-positive in hypo-endemic HDs, no cases or TL-seropositives were identified.
A case of gHAT was confirmed and treated in 2018 in Koudougou in Burkina Faso as part of the passive surveillance set up there. The epidemiological investigation showed that this case was most likely infected near Bonon where he lived from 2001 (his birth) to 2018 before moving to Koudougou for health reasons. The clinical questionnaire revealed significant neurological damage linked to an infection dating back several years. The case was included in the PNETHA registers as a confirmed case in 2018 from the HD of Bouaflé.
Therefore, in total, nine cases of gHAT were detected between 2015 and 2019 that were likely to have been infected in Côte d'Ivoire: six in the HD of Bouaflé and three in that of Sinfra. Table 7 gives data for the national indicator for EPHP as defined by the WHO (average number of gHAT cases per year over 5 consecutive years per 10,000 inhabitants, by HD [2]). Only the HDs of Bouaflé and Sinfra are shown, as these are the only HDs in which cases were reported between 2015 and 2019. Relative to the total population of the two HDs, the indicator was far below 1/10,000, a necessary condition for validating the EPHP.
Vector control
In Bonon, 267 traps were set during the T0 survey carried out in June 2015 (Fig 6B). By 2011 there was extremely little case reporting across all foci. Fig 8 shows the year in which transmission was estimated to be interrupted for each HD; for this calculation we utilised the analogous stochastic version of our model and the posterior parameterisation, to better factor in chance events around EoT and to remove the need for a proxy threshold to compute EoT from deterministic outputs (see S1 Text for more details). In the Bouaflé subprefecture, Daloa, and Oumé, we computed a moderate probability of EoT having occurred in 2015 or earlier, though with considerable uncertainty in the year of EoT in these locations. In Bonon and Sinfra, the use of highly impactful VC (from 2016 and 2017, respectively), coupled with the low or zero case reporting in recent years, results in model estimates of a high probability that EoT has already been achieved (Table 8 and Table H in S1 Text).
The reductions are computed through model fitting to historical data with the deterministic model. Medians and 95% credible intervals are given.
Discussion
gHAT control activities in Côte d'Ivoire have been based on an integrated approach, consisting of a combination of medical interventions (active and passive screening followed by treatment) and vector control. The results of active screening and identification of villages at risk have shown that there is most likely very little or no transmission of T. b. gambiense in historical HDs. Indeed, no gHAT cases or TL-seropositives were identified out of nearly 34,000 people tested between 2017 and 2019. Exhaustive and targeted active screening and passive screening activities also support the hypothesis of low or no transmission in hypo-endemic HDs with no cases detected even in the PRCT of Daloa. The results of active screening have shown a clear reduction in the reported prevalence of the disease in the HDs of Bouaflé and Sinfra. They have also justified the gradual abandonment of exhaustive active screening in favour of targeted active screening and passive screening already described in several gHAT foci [41]. These strategies, however, confirmed that two HDs still had an extremely low number of cases, all in second stage, during 2015-2019. The results shown in the present study confirm the continued trend of a decrease in case reporting already observed since the beginning of the 2000s and the discovery of the last active focus of gHAT in Côte d'Ivoire [42,43], and this is despite the socio-political crisis that Côte d'Ivoire went through between 2002 and 2012 [9,38]. Monthly supervision and annual retraining of the health workers involved in this project have contributed greatly to the effectiveness of the implemented strategy and to the reliability of data.
Modelling suggests that there is a corresponding decrease in underlying transmission, and all HDs have a very high probability that EoT has already occurred in Côte d'Ivoire. Collected data confirm the importance of having adapted screening strategies by targeting areas and populations at risk and which made it possible to detect the majority of the remaining gHAT cases [8,23,44,45]. The fact that all the notified cases were in stage 2 of the disease indicates that these are likely to be relatively old infections and there is probably an absence of recent transmission.
The vector control carried out in the HDs of Bouaflé (Bonon focus) and Sinfra led to a sharp drop in tsetse densities from the first deployment of Tiny Targets and/or traps. A tsetse density reduction of more than 90% was rapidly achieved in each focus and maintained until the end of 2019. The presence of residual populations of tsetse was maintained in conserved forests consisting essentially of sacred forests (often on the outskirts of villages) in which the laying of screens and traps was often forbidden. These forests constitute favourable biotopes for tsetse, due to the presence of free ranging domestic pigs which frequent them regularly and constitute an ideal source of food [31,46,47], in addition to other possible hosts such as reptiles. Pigs have already been described as a preferential feeding host for G. p. palpalis [48,49], the only tsetse species present in the two vector control areas. Nevertheless, vector control is believed to have had a substantial impact on the risk of transmission, as has already been described for the Bonon focus [31] and is supported for both Bonon and Sinfra by the modelling analysis conducted as part of the present study.
The gHAT epidemiology in Côte d'Ivoire also depends on the gHAT situation in neighbouring countries. Côte d'Ivoire has a border with five endemic gHAT countries: Liberia, Guinea, Mali, Burkina Faso and Ghana (Fig 1), with large cross-border mobility that poses a risk of spreading gHAT from border countries to Côte d'Ivoire, but also from Côte d'Ivoire to neighbouring countries. In the past, most of Côte d'Ivoire's historic foci were in direct contact with foci in neighbouring countries [50]. However, since 2000, no gHAT cases have been detected in cross-border foci and no cases in Côte d'Ivoire appear to have been infected in a neighbouring country, although we cannot rule that out. Since 2015, very few cases have been reported from neighbouring countries, in which there no longer seem to be active foci except on the Guinean coast [2], which is very far from Côte d'Ivoire. The risk of gHAT spreading into Côte d'Ivoire from a neighbouring country is therefore very low. Cases imported from Côte d'Ivoire have been regularly reported in Burkina Faso due to the large historical and recent population movements between the two countries [9,51,52]. However, the decrease in prevalence in Côte d'Ivoire has reduced the risk of spread to Burkina Faso, and the case detected in Koudougou in 2018 (infected in the Bouaflé HD) is the latest reported.
It is important to mention other phenomena that have not prevented the achievement of EPHP of gHAT in Côte d'Ivoire but which should be considered key with regard to EoT. This is particularly the case for the role of a domestic or wild animal reservoir in T. b. gambiense epidemiology, which is still under debate [53]. In Côte d'Ivoire, free-ranging pigs have been identified in the Sinfra, Bonon and Vavoua foci as a multi-reservoir of T. brucei and/or T. congolense, with mixed infections of different strains [46,47]. This trypanosome diversity hinders the easy and direct detection of T. b. gambiense. It is important to stress both the lack of tools to prove or exclude with certainty the presence of T. b. gambiense, and the need for technical improvements to explore the role of pigs, and animals in general, in the epidemiology of gHAT.
A residual human reservoir of T. b. gambiense could also compromise EoT in areas where tsetse are still present. TL-seropositive individuals (positive with either CATT or RDTs and with the highly specific TL test, but negative with parasitological tests) have been identified in both endemic HDs (Bouaflé and Sinfra) and in some hypo-endemic HDs. While we have already shown that some of them experienced spontaneous cure (and no longer pose a risk of transmission), we also observed that others are potential latent infections [22,54], as well illustrated by the two cases detected in 2017 and 2019 in the Bouaflé and Sinfra HDs, respectively. The case detected in Sinfra had been monitored for more than 20 years. Fortunately, he had been living in Abidjan for 15 years and probably did not pose any risk of transmission. This was not so for the Bouaflé case, who stayed for three years in his village before being parasitologically confirmed. Fortunately, no other cases were detected during the targeted active screening conducted between 2017 and 2019 on populations sharing the same spaces. The living area of this case was included in the VC campaign implemented in January 2016 in the Bouaflé HD, which may have limited the risk of transmission. In addition to these cases where infection is tolerated and diagnosis is difficult, there is also the difficulty of detecting gHAT cases in a context where the prevalence has become so low that the disease is no longer considered a threat by communities or by health workers. This is well illustrated by the complex health-seeking pathway of the case passively diagnosed in 2017 in the Sinfra HD, in which the first disease symptoms appeared three years earlier and the patient had visited several health care centres and hospitals in different cities [45].
The modelling analysis presented here used a previously developed mechanistic model which explicitly incorporated human-tsetse contact and parasite transmission as well as heterogeneities in exposure of people to tsetse blood feeding. Longitudinal case data was used to parameterise the model for each geographical location and the resultant model fits align well with reported active and passive cases. Nevertheless, it is acknowledged that this model variant does not incorporate the possibility for non-human animal-tsetse transmission cycles, nor potentially long-term asymptomatic human carriers. Either of these two possibilities could lead to more transmission events per detected case, and therefore to more pessimistic model outcomes [55,56]. Despite this, the extremely low case reporting across several years in Côte d'Ivoire may indicate that these transmission cycles (if they exist) are not sustaining transmission to humans; modelling analyses in the low-prevalence regions of the former Equateur province of the Democratic Republic of Congo [55] and the Mandoul focus of Chad [36] have found this kind of persistent low or zero reporting is suggestive of very limited or no infection contribution from non-human animals. Furthermore, in the foci with vector control, the large reduction in tsetse population density will have reduced transmission between tsetse and any potential infection source (animal or human).
The dynamic tsetse population sub-model used here includes the pupal stage of development as well as adult flies; this enabled us to model some resurgence of fly populations between Tiny Target deployments. This type of bounceback was included in the model to capture a plausible biological mechanism for tsetse population growth between vector control deployments, and the model matched fly catches well. We acknowledge that bounceback could also occur through reinvasion of flies from neighbouring regions with no control, and that other sources of tsetse-related data, including habitat or climate data, might be useful in trying to elucidate drivers of bounceback in different locations, especially after target deployments are stopped, or in predicting potential pockets of high tsetse density; however, these data require the use of alternative geostatistical modelling [57], which is beyond the scope of the present study.
While we use a stochastic simulation to model the human population, we have used a deterministic ODE-based approach to model tsetse dynamics. In general, a stochastic model would be preferred, especially at very low prevalence; however, due to the lack of data on the total tsetse population and the inability to uncouple the size of the tsetse population from the probability of infection per bite, we must instead fit a relative vector density [58]. This means that we are no longer modelling a discrete population of vectors but a continuous density, so a stochastic model is infeasible. Due to the slow dynamics of gHAT and the short life-span of tsetse, however, we expect this to have minimal impact on our estimates of elimination. In this study the focus was on past transmission; however, we do provide illustrative projections for the probability of EoT in Fig 8 and Fig M in S1 Text. These projections assume the continuation of the current strategy in all health districts, but further work should be done to explore a range of plausible future strategies, including scaling back. We recommend that these types of model projections also be coupled with health economic evaluations, which could be used to assess what, how much and where investment is needed for the gHAT programme in Côte d'Ivoire to quantify the pathway to country-wide EoT and verification of EoT, and also to consider what constitutes an efficient package of interventions to reach this target. As a preliminary study, a recent paper examined the costs of vector control using Tiny Targets in the Bonon focus from 2016 to 2017 [32].
This article summarises the information provided in the dossier that led to the WHO's validation of EPHP in December 2020 [59]. This success was achieved through an integrated approach combining medical screening and vector control interventions [12] and an integrated multi-stakeholder and multidisciplinary approach often needed in the fight against other infectious diseases including NTDs [60]. Research has played a major role in adapting tools and strategies to new epidemiological realities that present novel challenges. Moving towards the future, the strategies that will be put in place will have to be increasingly effective by targeting the areas and populations most at risk, to diagnose the last cases and minimise the risk of transmission via restriction of the human-tsetse and tsetse-T. b. gambiense contact.
The objective in Côte d'Ivoire is now to reach EoT by 2025. This requires continuing to adapt the control strategies. For the 2023-2025 step, focus will be on passive screening at the national scale and on reactive and targeted active screening including the follow-up of TL-seropositive subjects and people who share their places of life. Medical and entomological capacities for reaction will be maintained, should any case be identified in the country. It is also crucial to consider some new challenges, including (i) the potential pig reservoir of T. b. gambiense and its consequences on gHAT transmission, and (ii) community engagement to continue implementing suitable control strategies in a context where rare cases, if any, will be diagnosed. All the activities will be carried out in order to be able to compile the necessary information for the request for verification of EoT that may be submitted by the Ministry of Health to WHO in 2025.
|
v3-fos-license
|
2018-04-26T18:25:46.827Z
|
2018-03-29T00:00:00.000
|
4588148
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.aclweb.org/anthology/D18-1350.pdf",
"pdf_hash": "faa8f9a296320def3a0629b6d3acf9d84f2f45f0",
"pdf_src": "ACL",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:858",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "6a377115d0026bca3f67ed34ea03543880f6a2e3",
"year": 2018
}
|
pes2o/s2orc
|
Investigating Capsule Networks with Dynamic Routing for Text Classification
In this study, we explore capsule networks with dynamic routing for text classification. We propose three strategies to stabilize the dynamic routing process and alleviate the disturbance of noise capsules which may contain "background" information or may not have been successfully trained. A series of experiments is conducted with capsule networks on six text classification benchmarks. Capsule networks achieve state of the art on 4 out of 6 datasets, which shows their effectiveness for text classification. We additionally show that capsule networks exhibit significant improvement over strong baseline methods when transferring from single-label to multi-label text classification. To the best of our knowledge, this is the first work in which capsule networks have been empirically investigated for text modeling.
Introduction
Modeling articles or sentences computationally is a fundamental topic in natural language processing. It could be as simple as a keyword/phrase matching problem, but it could also be a nontrivial problem if compositions, hierarchies, and structures of texts are considered. For example, a news article which mentions a single phrase "US election" may be categorized into the political news with high probability. But it could be very difficult for a computer to predict which presidential candidate is favored by its author, or whether the author's view in the article is more liberal or more conservative.
Earlier efforts in modeling texts have achieved limited success on text categorization using a simple bag-of-words classifier (Joachims, 1998; McCallum et al., 1998), implying that understanding the meaning of the individual word or n-gram is a necessary step towards more sophisticated models. It is therefore not a surprise that distributed representations of words, a.k.a. word embeddings, have received great attention from the NLP community, addressing the question of "what" is to be modeled at the basic level (Mikolov et al., 2013; Pennington et al., 2014). In order to model higher-level concepts and facts in texts, an NLP researcher has to consider carefully the so-called "what" question: what is actually modeled beyond word meanings. A common approach to the question is to treat the texts as sequences and focus on their spatial patterns, whose representatives include convolutional neural networks (CNNs) (Kim, 2014; Zhang et al., 2015; Conneau et al., 2017) and long short-term memory networks (LSTMs) (Tai et al., 2015; Mousa and Schuller, 2017). Another common approach is to completely ignore the order of words but focus on their compositions as a collection, whose representatives include probabilistic topic modeling (Blei et al., 2003; Mcauliffe and Blei, 2008) and Earth Mover's Distance based modeling (Kusner et al., 2015; Ye et al., 2017).
Those two approaches, albeit quite different from the computational perspective, can actually be diagnosed by a common measure: their answers to the "what" question. In neural network approaches, spatial patterns aggregated at lower levels contribute to representing higher-level concepts; together, they form a recursive process to articulate what is to be modeled. For example, CNN builds convolutional feature detectors to extract local patterns from a window of vector sequences and uses max-pooling to select the most prominent ones. It then hierarchically builds such pattern extraction pipelines at multiple levels. Being a spatially sensitive model, CNN pays a price for the inefficiency of replicating feature detectors on a grid. As argued in (Sabour et al., 2017), one has to choose between replicating detectors whose size grows exponentially with the number of dimensions, or increasing the volume of the labeled training set in a similarly exponential way. On the other hand, methods that are spatially insensitive are perfectly efficient at inference time regardless of any order of words or local patterns. However, they are unavoidably more restricted in encoding the rich structures presented in a sequence. Improving the efficiency of encoding spatial patterns while keeping the flexibility of their representation capability is thus a central issue.
A recent method called capsule network introduced by Sabour et al. (2017) possesses this attractive potential to address the aforementioned issue. They introduce an iterative routing process to decide the credit attribution between nodes from lower and higher layers. A metaphor (also an argument) they made is that the human visual system intelligently assigns parts to wholes at inference time without hard-coding patterns to be perspective relevant. As an outcome, their model could encode the intrinsic spatial relationship between a part and a whole, constituting viewpoint-invariant knowledge that automatically generalizes to novel viewpoints. In our work, we follow a similar spirit in using this technique to model texts. Three strategies are proposed to stabilize the dynamic routing process and alleviate the disturbance of noise capsules which may contain "background" information such as stop words and words that are unrelated to specific categories. We conduct a series of experiments with capsule networks on top of pre-trained word vectors for six text classification benchmarks. More importantly, we show that capsule networks achieve significant improvement over the compared baseline methods when transferring from single-label to multi-label text classification.
Our Methodology
Our capsule network, depicted in Figure 1, is a variant of the capsule networks proposed in Sabour et al. (2017). It consists of four layers: an n-gram convolutional layer, a primary capsule layer, a convolutional capsule layer, and a fully connected capsule layer. In addition, we explore two capsule frameworks to integrate these four components in different ways. In the rest of this section, we elaborate the key components in detail.
N-gram Convolutional Layer
This layer is a standard convolutional layer which extracts n-gram features at different positions of a sentence through various convolutional filters. Suppose x ∈ R^(L×V) denotes the input sentence representation, where L is the length of the sentence and V is the embedding size of words. Let x_i ∈ R^V be the V-dimensional word vector corresponding to the i-th word in the sentence. Let W^a ∈ R^(K1×V) be the filter for the convolution operation, where K1 is the N-gram size, sliding over the sentence for the purpose of detecting features at different positions. The filter W^a convolves with the word window x_{i:i+K1−1} at each possible position (with stride 1) to produce a column feature map m^a ∈ R^(L−K1+1); each element m^a_i ∈ R of the feature map is produced by m^a_i = f(x_{i:i+K1−1} ◦ W^a + b_0), where ◦ is element-wise multiplication (followed by summation over the window), b_0 is a bias term, and f is a nonlinear activation function (i.e., ReLU). We have described the process by which one feature map is extracted from one filter. Hence, for a = 1, ..., B, with B filters of the same N-gram size in total, one can generate B feature maps, which can be rearranged as M = [m^1, m^2, ..., m^B] ∈ R^((L−K1+1)×B).
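The feature-map computation just described can be sketched directly; the following is an illustrative NumPy re-implementation (not the authors' code), with the filter shape and stride as described above:

```python
import numpy as np

def ngram_conv(x, W, b0):
    """Slide one K1 x V filter W over a sentence x (L x V) with stride 1.

    Each element of the returned feature map is
    ReLU(sum over the window of the element-wise product x[i:i+K1] * W, + b0).
    """
    L, V = x.shape
    K1, _ = W.shape
    m = np.empty(L - K1 + 1)
    for i in range(L - K1 + 1):
        m[i] = max(0.0, float(np.sum(x[i:i + K1] * W) + b0))  # ReLU activation
    return m
```

With B such filters, stacking the B feature maps column-wise yields the (L−K1+1)×B matrix M described above.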
Primary Capsule Layer
This is the first capsule layer, in which the capsules replace the scalar-output feature detectors of CNNs with vector-output capsules to preserve the instantiated parameters such as the local order of words and semantic representations of words. Suppose p_i ∈ R^d denotes the instantiated parameters of a capsule, where d is the dimension of the capsule. Let W^b ∈ R^(B×d) be the filter shared across the different sliding windows. With a window sliding over each N-gram vector, denoted M_i ∈ R^B, the corresponding N-gram phrase in the form of a capsule is produced by p_i = g(W^b M_i + b_1), where g is the nonlinear squash function applied to the entire vector and b_1 is the capsule bias term, yielding a column list of capsules p ∈ R^((L−K1+1)×d). For all C filters, the generated capsule feature maps can be rearranged as P = [p_1, p_2, ..., p_C] ∈ R^((L−K1+1)×C×d), where in total (L−K1+1)×C d-dimensional vectors are collected as capsules in P.
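The squash non-linearity g (from Sabour et al., 2017) maps a capsule vector to a length in [0, 1) while preserving its direction, so the length can serve as an existence probability. A minimal NumPy sketch:

```python
import numpy as np

def squash(s, eps=1e-8):
    """g(s) = (||s||^2 / (1 + ||s||^2)) * s / ||s||.

    Short vectors shrink toward zero; long vectors approach (but never
    reach) unit length. The direction of s is unchanged.
    """
    norm_sq = np.sum(s * s, axis=-1, keepdims=True)
    norm = np.sqrt(norm_sq + eps)  # eps guards against division by zero
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)
```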
Child-Parent Relationships
As argued in Sabour et al. (2017), capsule networks try to address the representational limitations and exponential inefficiencies of convolutions with transformation matrices. This allows the networks to automatically learn child-parent (or part-whole) relationships, constituting viewpoint-invariant knowledge that automatically generalizes to novel viewpoints. In this paper, we explore two different types of transformation matrices to generate the prediction vector (vote) û_{j|i} ∈ R^d from a child capsule i to a parent capsule j. The first one shares weights W^{t1} ∈ R^(N×d×d) across child capsules in the layer below, where N is the number of parent capsules in the layer above. Formally, each corresponding vote can be computed by û_{j|i} = W^{t1}_j u_i + b̂_{j|i}, where u_i is a child capsule in the layer below and b̂_{j|i} is the capsule bias term.
In the second design, we replace the shared weight matrix W^{t1}_j with a non-shared weight matrix W^{t2}_{i,j}, where the weight tensor is W^{t2} ∈ R^(H×N×d×d) and H is the number of child capsules in the layer below.
Dynamic Routing
The basic idea of dynamic routing is to construct a non-linear map in an iterative manner, ensuring that the output of each capsule gets sent to an appropriate parent in the subsequent layer. For each potential parent, the capsule network can increase or decrease the connection strength by dynamic routing, which is more effective than primitive routing strategies such as max-pooling in CNN, which essentially detects whether a feature is present at any position of the text but loses spatial information about the feature. We explore three strategies to boost the accuracy of the routing process by alleviating the disturbance of noisy capsules: Orphan Category Inspired by Sabour et al. (2017), an additional "orphan" category is added to the network, which can capture the "background" information of the text such as stop words and words that are unrelated to specific categories, helping the capsule network model child-parent relationships more efficiently. Adding an "orphan" category is more effective in text than in images, since there is no single consistent "background" object in images, while stop words are consistent in texts, such as the predicates "is", "am" and the pronouns "his", "she".
Leaky-Softmax
We explore Leaky-Softmax (Sabour et al., 2017) in place of the standard softmax while updating the connection strength between the child capsules and their parents. Besides the orphan category in the last capsule layer, we also need a light-weight method between two consecutive layers to route the noisy child capsules to an extra dimension without any additional parameters or computational cost.
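One common way to realize such a leaky normalisation (an illustrative sketch, not necessarily the exact variant used in the paper) is to append an extra "leak" dimension with a fixed zero logit before the softmax and drop it afterwards, so the routing weights to the real parents sum to less than one:

```python
import numpy as np

def leaky_softmax(b):
    """Softmax over parent logits b with one extra leak dimension
    (logit fixed at 0) that absorbs weight from noisy child capsules."""
    leak = np.zeros(b.shape[:-1] + (1,))
    logits = np.concatenate([leak, b], axis=-1)
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    p = z / z.sum(axis=-1, keepdims=True)
    return p[..., 1:]  # keep only the real-parent columns
```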
Coefficients Amendment
We also attempt to use the probability of existence of the child capsules in the layer below to iteratively amend the connection strength, as in Eq. (6).
Algorithm 1: Dynamic Routing Algorithm
procedure ROUTING(û_{j|i}, â_{j|i}, r, l)
  Initialize the logits of the coupling coefficients b_{j|i} = 0
  for r iterations do
    for all capsule i in layer l and capsule j in layer l + 1: c_{j|i} = â_{j|i} · leaky-softmax(b_{j|i})
    for all capsule j in layer l + 1: v_j = g(Σ_i c_{j|i} û_{j|i}), a_j = |v_j|
    for all capsule i in layer l and capsule j in layer l + 1: b_{j|i} = b_{j|i} + û_{j|i} · v_j
  return v_j, a_j
Given each prediction vector û_{j|i} and its probability of existence â_{j|i}, where â_{j|i} = â_i, each iterative coupling coefficient of connection strength c_{j|i} is updated by c_{j|i} = â_{j|i} · leaky-softmax(b_{j|i}) (Eq. 6), where b_{j|i} is the logit of the coupling coefficient. Each parent capsule v_j in the layer above is a weighted sum over all prediction vectors û_{j|i}: v_j = g(Σ_i c_{j|i} û_{j|i}), with probability a_j = |v_j|, where g is the nonlinear squash function (Sabour et al., 2017) applied to the entire vector. Once all of the parent capsules are produced, each coupling logit b_{j|i} is updated by b_{j|i} = b_{j|i} + û_{j|i} · v_j. For simplicity of notation, the parent capsules and their probabilities in the layer above are denoted as v, a = Routing(û), where û denotes all of the child capsules in the layer below, v all of the parent capsules, and a their probabilities.
Our dynamic routing algorithm is summarized in Algorithm 1.
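The routing loop of Algorithm 1 can be sketched compactly in NumPy. This is illustrative only; the array shapes and the leaky-softmax variant (an extra zero-logit leak dimension) are assumptions on our part, not the authors' implementation:

```python
import numpy as np

def squash(s, eps=1e-8):
    # g(s) = (||s||^2 / (1 + ||s||^2)) * s / ||s||
    n2 = np.sum(s * s, axis=-1, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def leaky_softmax(b):
    # Softmax with an extra leak column (logit 0), dropped after normalising.
    logits = np.concatenate([np.zeros(b.shape[:-1] + (1,)), b], axis=-1)
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return (z / z.sum(axis=-1, keepdims=True))[..., 1:]

def routing(u_hat, a_hat, r=3):
    """u_hat: (H, N, d) votes from H child capsules to N parents;
    a_hat: (H,) existence probabilities of the child capsules."""
    H, N, _ = u_hat.shape
    b = np.zeros((H, N))                                   # coupling logits
    for _ in range(r):
        c = a_hat[:, None] * leaky_softmax(b)              # amended strengths
        v = squash(np.sum(c[:, :, None] * u_hat, axis=0))  # parent capsules (N, d)
        b = b + np.sum(u_hat * v[None, :, :], axis=-1)     # agreement update
    a = np.linalg.norm(v, axis=-1)                         # parent probabilities
    return v, a
```

Note how the squash keeps every parent probability a_j strictly below 1, so the lengths behave like probabilities across iterations.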
Convolutional Capsule Layer
In this layer, each capsule is connected only to a local region K 2 × C spatially in the layer below. Those capsules in the region multiply transformation matrices to learn child-parent relationships followed by routing by agreement to produce parent capsules in the layer above.
Suppose W^{c1} ∈ R^(D×d×d) and W^{c2} ∈ R^(K2×C×D×d×d) denote the shared and non-shared weights, respectively, where K2·C is the number of child capsules in a local region in the layer below and D is the number of parent capsules to which the child capsules are sent. When the transformation matrices are shared across the child capsules, each potential parent capsule û_{j|i} is produced by û_{j|i} = W^{c1}_j u_i + b̂_{j|i} (Eq. 10), where b̂_{j|i} is the capsule bias term, u_i is a child capsule in a local region K2 × C, and W^{c1}_j is the j-th matrix in the tensor W^{c1}. Then we use routing-by-agreement to produce the parent-capsule feature maps, in total (L − K1 − K2 + 2) × D d-dimensional capsules in this layer. When using non-shared weights across the child capsules, we replace the transformation matrix W^{c1}_j in Eq. (10) with W^{c2}_{i,j}.
Fully Connected Capsule Layer
The capsules in the layer below are flattened into a list of capsules and fed into the fully connected capsule layer, in which capsules are multiplied by a transformation matrix W^{d1} ∈ R^(E×d×d) or W^{d2} ∈ R^(H×E×d×d), followed by routing-by-agreement to produce the final capsule v_j ∈ R^d and its probability a_j ∈ R for each category. Here, H is the number of child capsules in the layer below and E is the number of categories plus an extra orphan category.
The Architectures of Capsule Network
We explore two capsule architectures (denoted as Capsule-A and Capsule-B) to integrate these four components in different ways. Capsule-A starts with an embedding layer which transforms each word in the corpus into a 300-dimensional (V = 300) word vector, followed by a 3-gram (K1 = 3) convolutional layer with 32 filters (B = 32) and a stride of 1 with ReLU non-linearity. All the other layers are capsule layers, starting with a B × d primary capsule layer with 32 filters (C = 32), followed by a 3 × C × d × d (K2 = 3) convolutional capsule layer with 16 filters (D = 16) and a fully connected capsule layer, in sequence.
Each capsule has 16-dimensional (d = 16) instantiated parameters and their length (norm) can describe the probability of the existence of capsules. The capsule layers are connected by the transformation matrices, and each connection is also multiplied by a routing coefficient that is dynamically computed by routing by agreement mechanism.
The basic structure of Capsule-B is similar to Capsule-A except that we adopt three parallel networks with filter windows (N ) of 3, 4, 5 in the N -gram convolutional layer (see Figure 2). The final output of the fully connected capsule layer is fed into the average pooling to produce the final results. In this way, Capsule-B can learn more meaningful and comprehensive text representation.
Experimental Datasets
In order to evaluate the effectiveness of our model, we conduct a series of experiments on six benchmarks: movie reviews (MR) (Pang and Lee, 2005), Stanford Sentiment Treebank, an extension of MR (SST-2) (Socher et al., 2013), the Subjectivity dataset (Subj) (Pang and Lee, 2004), the TREC question dataset (TREC) (Li and Roth, 2002), customer reviews (CR) (Hu and Liu, 2004), and AG's news corpus (Conneau et al., 2017). These benchmarks cover several text classification tasks such as sentiment classification, question categorization, and news categorization. The detailed statistics are presented in Table 1.
Implementation Details
In the experiments, we use 300-dimensional word2vec (Mikolov et al., 2013) vectors to initialize the embedding vectors. We use a mini-batch size of 50 for AG's news and 25 for the other datasets. We use the Adam optimization algorithm with a learning rate of 1e-3 to train the model. We use 3 iterations of routing for all datasets, since this optimizes the loss faster and converges to a lower loss at the end.
Quantitative Evaluation
In our experiments, the evaluation metric is classification accuracy. We summarize the experimental results in Table 2. From the results, we observe that the capsule networks achieve the best results on 4 out of 6 benchmarks, which verifies their effectiveness. In particular, our model substantially and consistently outperforms the compared baseline methods.
Ablation Study
To analyze the effect of varying different components of our capsule architecture for text classification, we also report the ablation test of the capsule-B model in terms of using different setups of the capsule network. The experimental results are summarized in Table 5. Generally, all three proposed dynamic routing strategies contribute to the effectiveness of Capsule-B by alleviating the disturbance of some noise capsules which may contain "background" information such as stop words and the words that are unrelated to specific categories.
Single-Label to Multi-Label Text Classification
Capsule networks demonstrate promising performance in single-label text classification, which assigns a label from a predefined set to a text (see Table 2). Multi-label text classification is, however, a more challenging practical problem. Moving from single-label to multi-label (with n category labels) text classification, the label space is expanded from n to 2^n, so more training data is required to cover the whole label space. For single-label texts, it is practically easy to collect and annotate the samples. However, the burden of collection and annotation for a large-scale multi-label text dataset is generally extremely high. How deep neural networks (e.g., CNN and LSTM) best cope with multi-label text classification remains an open problem, since obtaining a large-scale multi-label dataset is a time-consuming and expensive process. In this section, we investigate the capability of the capsule network on multi-label text classification by using only the single-label samples as training data. With feature properties preserved as part of the information extracted by capsules, we may generalize the model better to multi-label text classification without an overly extensive amount of labeled data.
The evaluation is carried on the Reuters-21578 dataset (Lewis, 1992). This dataset consists of 10,788 documents from the Reuters financial newswire service, where each document contains either multiple labels or a single label. We reprocess the corpus to evaluate the capability of capsule networks of transferring from single-label to multi-label text classification. For dev and training, we only use the single-label documents in the Reuters dev and training sets. For testing, Reuters-Multi-label only uses the multi-label documents in testing dataset, while Reuters-Full includes all documents in test set. The characteristics of these two datasets are described in Table 3.
Following Sorower (2010), we adopt Micro-Averaged Precision (Precision), Micro-Averaged Recall (Recall) and Micro-Averaged F1 scores (F1) as the evaluation metrics for multi-label text classification. Each of these scores is first computed on individual class labels and then averaged over all classes (label-based measures). In addition, we also measure the Exact Match Ratio (ER), which considers partially correct predictions as incorrect and only counts fully correct samples.
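For concreteness, the micro-averaged scores and the exact match ratio over per-document label sets can be computed as follows (a generic sketch, not the authors' evaluation script):

```python
def multilabel_scores(y_true, y_pred):
    """y_true, y_pred: lists of label sets, one per document."""
    tp = sum(len(t & p) for t, p in zip(y_true, y_pred))  # correctly predicted labels
    n_pred = sum(len(p) for p in y_pred)
    n_gold = sum(len(t) for t in y_true)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # Exact Match Ratio: partially correct predictions count as wrong.
    er = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return precision, recall, f1, er
```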
The experimental results are summarized in Table 4. From the results, we can observe that the capsule networks achieve substantial and significant improvement in terms of all four evaluation metrics over the compared baseline methods on the test sets of both the Reuters-Multi-label and Reuters-Full datasets. In particular, a larger improvement is achieved on the Reuters-Multi-label dataset, which only contains the multi-label documents in the test set. This is within our expectation, since the capsule network is capable of preserving the instantiated parameters of the categories trained on single-label documents. The capsule network has much stronger transfer capability than the conventional deep neural networks. In addition, the good results on Reuters-Full also indicate that the capsule network has robust superiority over competitors on single-label documents.
Connection Strength Visualization
To visualize the connection strength between capsule layers clearly, we remove the convolutional capsule layer and let the primary capsule layer be followed by the fully connected capsule layer directly, where the primary capsules denote N-gram phrases in the form of capsules. The connection strength shows the importance of each primary capsule for the text categories, acting like a parallel attention mechanism. This should allow the capsule networks to recognize multiple categories in a text even though the model is trained on single-label documents. Due to space reasons, we choose a multi-label document from the Reuters-Multi-label test set whose category labels (i.e., Interest Rates and Money/Foreign Exchange) are correctly predicted (fully correct) by our model with high confidence (p > 0.8) to report in Table 6. The category-specific phrases such as "interest rates" and "foreign exchange" are highlighted in red. We use a tag cloud to visualize the 3-gram phrases for the Interest Rates and Money/Foreign Exchange categories. The stronger the connection strength, the bigger the font size. From the results, we observe that capsule networks can correctly recognize and cluster the important phrases with respect to the text categories. Histograms are used to show the intensity of the connection strengths between the primary capsules and the fully connected capsules, as shown in Table 6 (bottom line). Due to space reasons, five histograms are demonstrated. The routing procedure correctly routes the votes into the Interest Rates and Money/Foreign Exchange categories.
To experimentally verify the convergence of the routing algorithm, we also plot learning curve to show the training loss over time with different iterations of routing. From Figure 3, we observe that the Capsule-B with 3 or 5 iterations of routing optimizes the loss faster and converges to a lower loss at the end than the capsule network with 1 iteration.
Related Work
Early methods for text classification adopted typical features such as bag-of-words, n-grams, and their TF-IDF features (Zhang et al., 2008).
[Sample document from Table 6: a Reuters newswire excerpt on U.K. money market interest rates and sterling, with category-specific phrases such as "interest rates" and "foreign exchange" highlighted; candidate categories: Orphan, Mergers/Acquisitions, Money/Foreign Exchange, Trade, Interest Rates.]
Recent advances in deep neural networks and representation learning have substantially improved the performance of text classification tasks. The dominant approaches are recurrent neural networks, in particular LSTMs, and CNNs. Kim (2014) reported on a series of experiments with CNNs trained on top of pre-trained word vectors for sentence-level classification tasks; the CNN models improved upon the state of the art on 4 out of 7 tasks. Zhang et al. (2015) offered an empirical exploration of the use of character-level convolutional networks (ConvNets) for text classification, and the experiments showed that ConvNets outperformed the traditional models. Joulin et al. (2016) proposed a simple and efficient text classification method, fastText, which could be trained on a billion words within ten minutes. Conneau et al. (2017) proposed very deep convolutional networks (with 29 convolutional layers) for text classification. Tai et al. (2015) generalized the LSTM to tree-structured network topologies (Tree-LSTM), which achieved the best results on two text classification tasks.
Recently, a novel type of neural network was proposed using the concept of capsules to improve the representational limitations of CNNs and RNNs. Hinton et al. (2011) first introduced the concept of "capsules" to address the representational limitations of CNNs and RNNs. Capsules with transformation matrices allowed networks to automatically learn part-whole relationships. Consequently, Sabour et al. (2017) proposed capsule networks that replaced the scalar-output feature detectors of CNNs with vector-output capsules, and max-pooling with routing-by-agreement. The capsule network has shown its potential by achieving a state-of-the-art result on MNIST data. Unlike max-pooling in CNN, capsule networks do not throw away information about the precise position of the entity within the region; for low-level capsules, location information is place-coded by which capsule is active. Xi et al. (2017) further tested the application of capsule networks on CIFAR data with higher dimensionality. Hinton et al. (2018) proposed a new iterative routing procedure between capsule layers based on the EM algorithm, which achieves significantly better accuracy on the smallNORB dataset. Zhang et al. (2018) generalized existing routing methods within the framework of weighted kernel density estimation. To date, no work has investigated the performance of capsule networks on NLP tasks; this study takes the lead on this topic.
Conclusion
In this paper, we investigated capsule networks with dynamic routing for text classification. Three strategies were proposed to boost the performance of the dynamic routing process and alleviate the disturbance of noisy capsules. Extensive experiments on six text classification benchmarks show the effectiveness of capsule networks for text classification. More importantly, capsule networks also show significant improvement over the baseline methods when transferring from single-label to multi-label text classification.
|
v3-fos-license
|
2020-09-16T13:06:17.860Z
|
2020-09-14T00:00:00.000
|
221719950
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0239264&type=printable",
"pdf_hash": "ebb85375c3747e065f4feb78df3b0850739d8d24",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:859",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "10c36ae54e9655337426a1503a53606e6aef851a",
"year": 2020
}
|
pes2o/s2orc
|
Genetic analysis of Cryptozona siamensis (Stylommatophora, Ariophantidae) populations in Thailand using the mitochondrial 16S rRNA and COI sequences
Cryptozona siamensis, one of the most widespread land snails, is native to Thailand and plays a key role as an agricultural pest and an intermediate host for Angiostrongylus spp. However, its genetic diversity and population structure have not yet been investigated and are poorly understood. Therefore, a genetic analysis of the C. siamensis population in Thailand was conducted, based on mitochondrial 16S rRNA (402 bp) and COI (602 bp) gene fragment sequences. Cryptozona siamensis were randomly collected from 17 locations in four populations across Thailand between May 2017 and July 2018. Fifty-eight snails were used to examine the phylogeny, genetic diversity, and genetic structure. The maximum likelihood tree based on the 16S rRNA and COI fragment sequences revealed two main clades. A total of 14 haplotypes with 44 variable nucleotide sites were found in the 16S rRNA sequences, while 14 haplotypes with 57 variable nucleotide sites were found in the COI sequences. The genetic diversity of C. siamensis, in terms of the number of haplotypes and haplotype diversity, was found to be high, but the nucleotide diversity was low for both the COI and the 16S rRNA sequences. The population genetic structure of C. siamensis revealed genetic differentiation among most populations in Thailand; however, the low genetic differentiation between some populations may be due to high gene flow. This study provides novel insights into the basic molecular genetics of C. siamensis.
Introduction
The land snail Cryptozona siamensis, a terrestrial pulmonate gastropod, belongs to the family Ariophantidae [1,2]. Cryptozona siamensis has been reported as an intermediate host of Angiostrongylus, which causes eosinophilic meningitis in humans worldwide, especially in China, Taiwan, and Thailand [3,4]. Cryptozona siamensis is important as an intermediate host that promotes the endemicity and transmission of Angiostrongylus cantonensis and A. malaysiensis [3,5,6]. The distribution of the snail hosts facilitates the establishment of the life cycle of the parasite; in addition, the distribution of the land snails accelerates the spread of A. cantonensis [7,8]. Ingestion of raw or undercooked infected snails, of snails accidentally chopped up in vegetables, vegetable juices, or salads, or of foods contaminated by the slime of infected snails is a high-risk route of infection with A. cantonensis in humans [9]. Cryptozona siamensis is native to Thailand and is regarded as a cosmopolitan species, being one of the most widespread land snails in Southeast Asia [2,10], with reports of C. siamensis from areas adjacent to Thailand, such as Malaysia, Singapore, and Laos [6,11,12]. Cryptozona siamensis has gained attention as an important agricultural and horticultural pest in India, the United States of America, and Thailand [13,14].
In the last few years, DNA sequencing data have been used to study and clarify the evolution of morphological characteristics of ambiguous organisms [15][16][17][18], while genetic studies of land snails using DNA sequence data can be valuable for their identification [19][20][21]. The genes most commonly used for genetic analysis in land snails are the mitochondrial cytochrome c oxidase subunit I (COI) and 16S rRNA genes [20,[22][23][24]. The COI gene carries a greater range of phylogenetic information than other mitochondrial genes and is considered a robust evolutionary marker for studies of inter-specific relationships [25,26]. In addition, the 16S rRNA gene has a high level of inter-specific polymorphism and has therefore been widely used in genetic studies of snails [25,27]. The genetic structure of land snails has been studied in several geographical regions. In Hawaii, analysis of the COI and 16S rRNA genes by haplotype networks, gene tree topologies, pairwise molecular divergence, and F ST matrices revealed substantial geographic genetic structuring and complex dispersal patterns in the land snail Succinea caduca [28]. In China, Zhou et al. studied the population genetic structure of the land snail Camaena cicatricosa from 20 locations using mitochondrial gene (COI and 16S rRNA) and internal transcribed spacer (ITS2) sequences; this showed significant fixation indices of genetic differentiation and high gene flow among most populations [29]. In Thailand, the genetic variation of the COI gene in Achatina fulica was found to be low [30]. On Langkawi island of Malaysia, low genetic diversity of the families Ariophantidae (Cryptozona siamensis and Sarika resplendens) and Dyakiidae (Quantula striata) was also noted using the 16S rRNA gene [11].
Molecular population phylogeographic studies can offer information on specific genetic variations, population formation, and genetic structure [31]. Moreover, they can also help to identify how a population has been affected by various factors, including the ecological environment, climate, human activities, and geographical conditions [32][33][34]. However, the genetic diversity and structure of C. siamensis are currently poorly understood, as they have been the focus of only a limited number of studies. The genetic structure of C. siamensis from three populations in Thailand and one additional population from Malaysia has been studied using allozyme variation on horizontal starch gel electrophoresis [2]. Although the genetic structure of C. siamensis was examined in that previous study, no nucleotide sequencing of this land snail is available in the country. Therefore, the objective of this research was to investigate the genetic diversity and genetic structure of C. siamensis from Thailand, based on mitochondrial DNA sequence variation at the COI and 16S rRNA loci.
Ethics and biosafety statement
The experimental protocol for the use of animals (snail intermediate host) in this study was approved by the Center for Animal Research of Naresuan University (Project Ethics Approval No: NU-AQ610711). The biosafety protocol was approved by the Naresuan University Institutional Biosafety Committee (Project Approval No: NUIBC MI 61-08-50).
Collection and preliminary identification of the snails
During a survey for A. cantonensis, which uses snails as an intermediate host, C. siamensis specimens were randomly collected from 17 locations across Thailand between May 2017 and July 2018 (Fig 1). The snail populations (A, B, C, and D) were defined according to biogeographical regions (Table 1). The snails were found in several types of natural habitat, such as under or on trunks of fallen trees, under stones, in flower pots, and in wall crevices. The snails were collected by hand picking and then placed in an aerated net for transport to the Department of Microbiology and Parasitology, Faculty of Medical Science, Naresuan University, Phitsanulok, Thailand. All snails were cleaned with tap water and preliminarily identified according to the previously recorded morphological description of C. siamensis [12]: a medium-sized, two-toned shell (> 30 mm in shell width); a discoidal shell with a low spire; a light straw-colored ventral part and light brown dorsal part; a fine surface with fine reticulate sculpture and dense axial grooves; and a smooth, slightly shiny ventral surface [12]. The body of each C. siamensis specimen was removed from its shell, and approximately 25 mg of foot tissue was removed and preserved at -20˚C for subsequent DNA extraction. To detect Angiostrongylus larvae, the remaining snail tissue was artificially digested with a 0.7% (w/v) pepsin solution, as previously described [3]. All the C. siamensis samples in this study were found to be negative for Angiostrongylus larvae.
Genomic DNA extraction
Genomic DNA from each individual C. siamensis was extracted using a Tissue & Cell Genomic DNA Purification Kit (GMbiolab Co., Ltd., Taichung, Taiwan), according to the manufacturer's instructions. An aliquot of the DNA solution was checked by running it on a 0.8% (w/v) agarose gel in 1 × TBE buffer at 100 V. The gel was stained with ethidium bromide, destained with distilled water, and photographed under UV light. The rest of the DNA solution was kept at -20˚C for later use as the PCR template.
Polymerase chain reaction (PCR) and sequencing
The DNA fragment (500 bp) of the 16S rRNA gene was amplified by PCR using the 16Sar (5′-CGCCTGTTTATCAAAAACAT-3′) and 16Sbr (5′-CCGGTCTGAACTCAGATCACGT-3′) primers [35]. The PCR amplifications were performed in a 30 μl total volume, containing 15 μl of EconoTaq PLUS 2 × Master mix (1×; Lucigen Corporation, Middleton, WI, USA), 1.5 μl of 5 μM of each primer (0.25 μM), 9 μl of distilled water, and 3 μl of the DNA template (20-200 ng). Thermal cycling for the 16S rRNA PCR amplification was performed at 96˚C for 2 min, followed by 35 cycles of 94˚C for 30 s, 45˚C for 1 min, and 72˚C for 2 min, and then a final 72˚C for 5 min [36]. The 710 bp segment of the COI gene was amplified using the LCO1490 (5′-GGTCAACAAATCATAAAGATATTGG-3′) and HCO2198 (5′-TAAACTTCAGGGTGACCAAAAAATCA-3′) primers [37], as described for the 16S rRNA gene, except that the thermal cycling used annealing and extension times of 40 s and 90 s, respectively. All PCR amplifications were conducted in a Biometra TOne thermal cycler (Analytik Jena AG, Jena, Germany). The amplified products were analyzed by 1.2% (w/v) agarose gel electrophoresis at 100 V, stained with ethidium bromide, destained with distilled water, and visualized and photographed under UV light. The PCR products were then purified using a NucleoSpin Gel and PCR Clean-up kit (Macherey-Nagel, Germany), according to the manufacturer's instructions. An aliquot of the purified PCR product was checked by 1.2% (w/v) agarose gel electrophoresis as above, while the rest was used as the template for commercial sequencing (Macrogen Inc., Seoul, Korea) in both the forward and reverse directions, using the same primers as in the PCR.
Sequence and phylogenetic analysis
The nucleotide sequences were edited by viewing the peaks of the chromatogram in the SeqMan II software (DNASTAR, Madison, WI, USA). Phylogenetic analysis, including species identification (conversion of molecular operational taxonomic units to likely species designations) of the Cryptozona, was performed by BLASTn searching the NCBI database (http://blast.ncbi.nlm.nih.gov/Blast.cgi) and aligning the obtained homologous nucleotide sequences using ClustalW. Phylogenies were estimated using maximum likelihood (ML) with the general time reversible model, neighbor joining (NJ) with the Kimura two-parameter (K2) model, and maximum parsimony (MP) with subtree-pruning-regrafting (SPR), with node support values based on 1000 bootstrap replicates, in the MEGA version 7.0 program [38]. In addition, Bayesian inference (BI) analysis was performed using MrBayes version 3.2 [39], where the tree space was explored using four chains for each run of a Markov chain Monte Carlo (MCMC) algorithm. The BI analysis was run for 10 000 000 generations and sampled every 100 generations. The last 10 000 trees were used for the Bayesian posterior probabilities (bpp), with a burn-in of 90 001 samples, as previously reported [40]. Although four methods were used to construct the phylogeny, only the ML topology is shown in the present study; the bootstrap values from the three methods and the percentage Bayesian posterior probabilities are indicated on the branches of the ML tree.
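The K2 (Kimura two-parameter) distance used for the NJ analysis weights transitions (A↔G, C↔T) and transversions separately. A minimal illustrative implementation of the K2P formula — not the MEGA code, and ignoring gaps and ambiguous bases — looks like this:

```python
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned, gap-free sequences.

    P = proportion of transition differences, Q = proportion of transversion
    differences; d = -0.5 * ln((1 - 2P - Q) * sqrt(1 - 2Q)).
    """
    assert len(seq1) == len(seq2), "sequences must be aligned to equal length"
    transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == b:
            continue
        same_class = ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES)
        transitions += same_class          # A<->G or C<->T
        transversions += not same_class    # purine <-> pyrimidine
    n = len(seq1)
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

d = k2p_distance("AAAAAAAAAA", "GAAAAAAAAA")  # one transition out of ten sites
```

For small divergences the K2P distance is close to the raw proportion of differing sites; the logarithmic correction matters as distances grow.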
Genetic analysis
Haplotype diversity and nucleotide diversity were calculated in ARLEQUIN, version 3.5.1.2 [41]. The relationships among the haplotypes were estimated using a median joining (MJ) network [42]. The MJ network analysis was performed in NETWORK, version 5.0.1.1, based on 65 16S rRNA sequences (including 7 sequences from GenBank) and 58 COI sequences. The genetic differentiation among the populations from each region was calculated in ARLEQUIN, based on pairwise F ST . Analysis of molecular variance (AMOVA), performed in ARLEQUIN, was used to test the genetic differences among groups. Although data were generated for both the 16S rRNA and COI gene fragments for most of the populations from the different regions, the substitution rate was higher for the COI, making it the more appropriate and informative population-level marker. Thus, the COI data were the focus of the population genetic structure analyses.
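The two diversity statistics computed in ARLEQUIN follow standard formulas and can be illustrated with small pure-Python functions; the toy alignment below is an assumption for demonstration, not data from the study:

```python
from collections import Counter
from itertools import combinations

def haplotype_diversity(haplotypes):
    """Nei's haplotype (gene) diversity: Hd = n/(n-1) * (1 - sum of squared frequencies)."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    return n / (n - 1) * (1 - sum((c / n) ** 2 for c in counts.values()))

def nucleotide_diversity(seqs):
    """Nucleotide diversity (pi): mean proportion of differing sites over all sequence pairs."""
    n, length = len(seqs), len(seqs[0])
    total = sum(
        sum(a != b for a, b in zip(s1, s2)) / length
        for s1, s2 in combinations(seqs, 2)
    )
    return total / (n * (n - 1) / 2)

# Toy aligned sample: 3 haplotypes among 4 individuals
sample = ["ACGT", "ACGT", "ACGA", "TCGA"]
hd = haplotype_diversity(sample)
pi = nucleotide_diversity(sample)
```

This also makes the paper's pattern easy to see: many distinct haplotypes push Hd up even when the haplotypes differ at only a few sites, keeping pi low.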
Molecular identification of Cryptozona siamensis
To identify the Cryptozona species, 58 individual land snails (19 samples from population A, 21 from population B, 9 from population C, and 9 from population D) were randomly selected for genetic study. PCR-based analysis and sequencing of their 16S rRNA and COI regions were performed, together with a BLASTn search of the edited sequences. Based on the 402 bp of the 16S rRNA gene, all 58 sequences (GenBank accession nos. MK858467-MK858524) in this study showed high identity (96 to 99%) with GenBank accession no. JQ728565, which is in agreement with the identification of the taxa in the present study as C. siamensis.
Phylogeny of Cryptozona siamensis
A phylogenetic tree was constructed using the ML, NJ, MP, and BI methods, and the 58 sequences, based on their 16S rRNA genes, were divided into two main clades. Based on the topology of the ML tree (Fig 2), clade 1 (drawn from all populations) contained 55 of the sequences, sourced from Phitsanulok, Phetchabun, Pathum Thani, Nakhon Pathom, Loei, Nong Bua Lam Phu, Chaiyaphum, Maha Sarakham, Buri Ram, Chiang Rai, Chiang Mai, Nan, Uttaradit, Chumphon, Surat Thani, and Pattani Provinces. These sequences were closely related to 7 sequences of C. siamensis from Malaysia, with a Bayesian posterior probability of 97%. Clade 2 contained 3 sequences from the Chon Buri Province (Fig 2), with 100% bootstrap support values for both the ML and NJ methods. The intraspecific distances among the samples were 0.0-5.4% (S1 Table).
The phylogenetic analysis based on the COI sequences (602 bp) for the 58 C. siamensis samples (GenBank accession nos. MK858409-MK858466) also revealed 2 main clades (Fig 3). Clade 1 contained 55 sequences, with bootstrap support values for ML and NJ of 100% each. Clade 2 contained 3 sequences, with bootstrap support values of 100% for each of the ML, NJ, and MP methods. Intraspecific distances among the samples were 0.0-7.5% (S2 Table).
Mitochondrial DNA sequence variation
The mitochondrial 16S rRNA gene (402 bp) was obtained from 58 sequences of C. siamensis in Thailand plus 7 sequences from Malaysia, in 4 populations (A, B, C, and D). Fourteen haplotypes (16S1-16S14) were identified, with 44 variable nucleotide sites (S3 Table). Of these, 9 haplotypes (16S1, 16S2, 16S6, 16S8, 16S9, 16S10, 16S11, 16S13, and 16S14) were unique, and 5 haplotypes were shared by at least two populations (Table 1 and S4 Table). The geographically widespread haplotype 16S3 is shared between populations C and D. Haplotype 16S4 was shared among populations B, C, and D. Haplotype 16S5 was shared among populations A, B, and C. Haplotype 16S7 was shared between populations C and D. Haplotype 16S12 was shared between populations A and B (Figs 1 and 4). The haplotype diversity in each population ranged from 0.6238 in population B to 0.8833 in population D, with a mean of 0.8779. Nucleotide diversity in each population ranged from 0.0088 in population C to 0.0169 in population A, with a mean of 0.0169 (Table 2).
Fourteen haplotypes (CO1-CO14) from the 58 sequences of the 4 populations (A, B, C, and D) in Thailand were identified based on the COI gene (602 bp), with nucleotide variation at 57 sites (S5 Table). Of these, 9 haplotypes (CO6, CO7, CO8, CO9, CO10, CO11, CO12, CO13, and CO14) were unique, and 5 haplotypes were shared by at least two populations (Table 1 and S6 Table). The geographically widespread haplotype CO1 was shared among populations B, C, and D. Haplotype CO2 was shared among populations A, B, and C. Haplotype CO3 was shared between populations A and B. Haplotype CO4 was shared between populations C and D. Haplotype CO5 was shared among populations A, C, and D (Figs 1 and 5). The haplotype diversity in each population ranged from 0.6238 in population B to 0.8611 in population D, with a mean of 0.8609. Nucleotide diversity in each population ranged from 0.0083 in population B to 0.0271 in population A, with a mean of 0.0180 (Table 3).
Population genetic structure
Population pairwise F ST values for the 16S rRNA sequences of C. siamensis revealed statistically significant differentiation (P < 0.05), except between populations C and D, where no genetic difference was identified (Table 4). The population pairwise F ST values based on the COI sequences showed that populations A, B, C, and D were all significantly genetically differentiated (Table 5).
Discussion
We have analyzed the genetics of the land snail C. siamensis based on its mitochondrial 16S rRNA and COI genes. Cryptozona siamensis showed high haplotype diversity (mean haplotype diversity of 0.8779 for the 16S rRNA gene and 0.8609 for the COI gene). The 65 16S rRNA gene sequences were classified into 14 haplotypes, 9 of which were unique and 5 of which were shared between different populations. The 58 COI sequences were likewise classified into 14 haplotypes, 9 unique and 5 shared between different populations. High haplotype diversity (39 mitochondrial haplotypes) was also reported for Camaena cicatricosa in China [29]. In contrast, low haplotype diversity (2 COI haplotypes) was found in A. fulica in Thailand [30]. The genetic diversities of the 16S rRNA and COI genes of C. siamensis differed. These results were similar to those from a previous report on the genetic diversity of the 16S rRNA and COI genes of the land snail Cyclophorus fulguratus [40]: the genetic variation in the 16S rRNA gene was lower than that of the COI gene. This suggests that the 16S rRNA gene in the land snail is a slowly evolving region of the mtDNA [43]. In terms of evolution rate, the COI gene is superior to the 16S rRNA gene [44] and is thus considered a reliable phylogenetic marker for C. siamensis; many previous reports on land snails have established the COI gene as a reliable molecular marker for phylogenetic analysis [44][45][46]. In contrast with the haplotype diversity, the nucleotide diversity was low for both the 16S rRNA (mean = 0.0169) and COI (mean = 0.0180) genes of this snail. The low nucleotide diversity of C. siamensis in the present study could be due to small sample sizes; this is consistent with previous research on the snail Succinea caduca [28].
The pattern of high haplotype diversity and low nucleotide diversity implies that independent founder events resulted in multiple unique populations, each of which persisted in isolation for a sufficient amount of time to allow the accumulation of substitutions through drift [28]. The genetic diversity of the species could be affected by mutation and selection, and these effects may play an important role in shaping the genetic diversity of populations. In addition, climate may affect population size, as observed in other snails such as Camaena cicatricosa and Succinea caduca [28,29]. The genetic diversity of the snail Collisella subrugosa in Brazil likewise showed variation due to environmental conditions and differing selection pressures [47]. The population genetic structure was examined using the COI and 16S rRNA sequence data from the 4 populations in Thailand; the substitution rate for the COI gene was high in C. siamensis from all populations. Previous research has suggested that a pairwise F ST greater than 0.15 implies a high level of genetic differentiation among populations, whereas a pairwise F ST between 0 and 0.05 implies a low level of genetic differentiation [48]. In this study, the pairwise F ST values showed significant genetic differentiation between most populations, based on the analysis of the 16S rRNA and COI genes. Moreover, the phylogenetic analysis and haplotype network construction showed a lack of clear population genetic structure, suggesting that gene flow within the C. siamensis population in Thailand might bring about genetic homogeneity [49]. In this study, almost all of the pairwise F ST values between populations, based on the 16S rRNA and COI sequences, were over 0.15.
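The F ST thresholds quoted above can be illustrated with the basic haplotype-frequency form F_ST = (H_T − H_S)/H_T, where H_S is the mean within-population gene diversity and H_T the diversity of the pooled sample. This is a simplified sketch on toy data, not the estimator ARLEQUIN actually implements:

```python
from collections import Counter

def gene_diversity(pop):
    """Expected heterozygosity: 1 - sum of squared haplotype frequencies."""
    n = len(pop)
    return 1 - sum((c / n) ** 2 for c in Counter(pop).values())

def pairwise_fst(pop1, pop2):
    """Simple haplotype-frequency F_ST: (H_T - H_S) / H_T for two populations."""
    h_s = (gene_diversity(pop1) + gene_diversity(pop2)) / 2  # mean within-population diversity
    h_t = gene_diversity(pop1 + pop2)                        # pooled-sample diversity
    return (h_t - h_s) / h_t

# Identical haplotype frequencies in both populations -> F_ST near 0 (panmixia)
low = pairwise_fst(["h1"] * 5 + ["h2"] * 5, ["h1"] * 5 + ["h2"] * 5)
# Each population fixed for a different haplotype -> F_ST = 1 (complete differentiation)
high = pairwise_fst(["h1"] * 10, ["h2"] * 10)
```

Against this scale, the paper's observation that most pairwise values exceed 0.15 corresponds to substantial between-population structure, while the C-D comparison behaves like the first (shared-frequency) case.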
In contrast, the pairwise F ST values based on the 16S rRNA sequences showed different results for some populations compared to the COI sequences. This suggests that the 16S rRNA gene has a relatively slow evolution rate and low variability compared to the COI gene [44,50], in agreement with the study of Feng et al. (2011), who found more variable nucleotide sites in the COI gene than in the 16S rRNA gene in mollusks of the family Pectinidae. Genetic differentiation of C. siamensis populations may be affected by the ecological environment, climate barriers, and geographic barriers [2,29]. This is supported by previous studies reporting an effect of the ecological environment on population differentiation in other land snails, such as Camaena cicatricosa [29] and Cyclophorus fulguratus [42,51]. In addition, repeated migration and expansion of populations from different areas continuously mix land snail populations, which is likely to promote the accumulation of high genetic diversity within populations of a species, as reported in the pulmonate snail Euhadra quaesita [52]; this may be another possible reason for the genetic differences among populations of C. siamensis. Genetic homogeneity of C. siamensis in Thailand, based on the 16S rRNA and COI genes, was found among some populations, suggesting that extensive gene flow is a possibility. In support of this, considerable levels of gene flow have been reported among C. siamensis populations in Thailand: populations of C. siamensis in areas with a low level of genetic differentiation exhibit higher levels of gene flow than populations with high genetic differentiation [53]. Although this land snail has low dispersal ability, water, wind, anthropochory, and other factors can lead to a wider distribution, especially through human activities [54]. Recently, Prasankok and Panha (2011) reported on allozyme variation in C. siamensis from three regions of Thailand and one region in Malaysia; the populations of C. siamensis among the 3 geographic regions (north, central, and south) of Thailand, as well as the population in Malaysia, showed a high degree of gene flow. This is consistent with the present finding that the haplotype network of C. siamensis in Thailand was shared and linked to the haplotype from Malaysia by mutation steps. In the present study, gene flow within the C. siamensis population in Thailand could be possible for several reasons.
Cryptozona siamensis often occurs in habitats associated with human activities, such as vegetable gardens, flower pots, and wall crevices of houses. In particular, the transportation of potted plants or vegetables contaminated with snails across provinces may have promoted the movement of snails; dispersal of C. siamensis among populations in each area by humans is therefore plausible. Similarly, the African land snail A. fulica could be spread in China through shipments of plants, especially potted plants [55], and C. cicatricosa in China could be spread through cargo transportation [29]. Therefore, the dispersal of C. siamensis between populations in each region may arise from human activities [2], and the possibility of gene flow may be associated with the anthropochoric effects of snail dispersal [2,56]. In addition, birds may have played an important role in the transport of snails [57]: the movement of eggs or small snails attached to birds may have contributed to the distribution between areas within regions. Thus, gene flow in C. siamensis from Thailand may be explained by several possibilities. Moreover, transportation through human activities may promote the spread of A. cantonensis, which is hosted by this land snail; similarly, Lv et al. (2009) reported that the distribution of A. cantonensis in China is associated with the invasive land snail A. fulica [8].
Although no C. siamensis infected with A. cantonensis was found in the present study, previous reports have found infected C. siamensis. In Thailand, C. siamensis infected with A. cantonensis has been found in Phetchabun, Kalasin, Phitsanulok, and Kamphaeng Phet provinces [3]. In general, dishes made from C. siamensis are uncommon for people in Thailand; humans may instead become infected with A. cantonensis by consuming vegetables contaminated with A. cantonensis larvae [58]. Therefore, C. siamensis is important in maintaining the life cycle of A. cantonensis and may contribute to transmission to humans.
Conclusion
We have reported on the genetic diversity of the 16S rRNA and COI genes from the C. siamensis samples taken in Thailand. The maximum likelihood tree based on the 16S rRNA and COI fragment sequences of C. siamensis revealed two main clades: most of the sequences fell into clade 1, while 3 samples from Chon Buri Province (population A) were placed in clade 2. The genetic diversity of C. siamensis, in terms of the number of haplotypes and haplotype diversity, was found to be high, but the nucleotide diversity among the different populations in Thailand was low for both the COI and the 16S rRNA sequences. The population genetic structure of C. siamensis, based on the F ST values for the 16S rRNA and COI genes, showed genetic differentiation among most populations, with the exception of a few. Genetic differentiation among the populations of C. siamensis may result from the effects of the ecological environment, climate barriers, and geographic barriers, while the low genetic differentiation between some populations may be due to high gene flow, possibly arising from transportation through human activities. In addition, transportation by humans may lead to the spread of A. cantonensis, which is hosted by C. siamensis. This study shows the genetic diversity of the C. siamensis populations across several regions of Thailand, and their interrelatedness.
Supporting information S1
|
v3-fos-license
|
2023-05-30T15:01:24.769Z
|
2023-05-28T00:00:00.000
|
258963610
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/diagnostics13111887",
"pdf_hash": "f31a1b343c9c632e609db4d7a70060218440352e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:860",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "ab238a1e161980d0efe0ee800c3588578c469c9c",
"year": 2023
}
|
pes2o/s2orc
|
A New Hybrid Approach Based on Time Frequency Images and Deep Learning Methods for Diagnosis of Migraine Disease and Investigation of Stimulus Effect
Migraine is a neurological disorder that is associated with severe headaches and seriously affects the lives of patients. Diagnosing Migraine Disease (MD) can be laborious and time-consuming for specialists. For this reason, systems that can assist specialists in the early diagnosis of MD are important. Although migraine is one of the most common neurological diseases, there are very few studies on the diagnosis of MD, especially electroencephalogram (EEG)- and deep learning (DL)-based studies. For this reason, in this study, a new system is proposed for the early, EEG- and DL-based diagnosis of MD. In the proposed study, EEG signals obtained during the resting state (R), under visual stimulus (V), and under auditory stimulus (A) from 18 migraine patients and a 21-member healthy control (HC) group were used. By applying continuous wavelet transform (CWT) and short-time Fourier transform (STFT) methods to these EEG signals, scalogram and spectrogram images were obtained in the time-frequency (T-F) plane. These images were then applied as inputs to three different convolutional neural network (CNN) architectures (AlexNet, ResNet50, SqueezeNet) as the proposed deep convolutional neural network (DCNN) models, and classification was performed. The classification results were evaluated using the accuracy (Acc), sensitivity (Sens), and specificity (Spec) performance criteria, and the performances of the methods and models preferred in this study were compared. In this way, the condition, method, and model that showed the most successful performance for the early diagnosis of MD were determined. Although the classification results are close to each other, the resting state, CWT method, and AlexNet classifier showed the most successful performance (Acc: 99.74%, Sens: 99.9%, Spec: 99.52%). We think that the results obtained in this study are promising for the early diagnosis of MD and can be of help to experts.
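The STFT half of the T-F step described above can be sketched with plain NumPy; the window length, hop size, sampling rate, and the synthetic 10 Hz test signal below are illustrative assumptions, not the study's settings:

```python
import numpy as np

def stft_spectrogram(x, fs, win_len=128, hop=64):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform.

    Returns (freqs, times, |STFT|); the |STFT| matrix is the kind of
    time-frequency image that can be rendered and fed to a CNN.
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T          # (freq_bins, n_frames)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    times = (np.arange(n_frames) * hop + win_len / 2) / fs  # frame-center times
    return freqs, times, spec

fs = 256                             # a common EEG sampling rate (assumption)
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)       # synthetic 10 Hz (alpha-band-like) test tone
freqs, times, spec = stft_spectrogram(x, fs)
peak_freq = freqs[spec.mean(axis=1).argmax()]   # dominant frequency of the image
```

The CWT-based scalogram differs in that its frequency resolution varies with scale, but both yield 2-D T-F images suitable as CNN inputs.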
Introduction
Migraine is a neurological disorder that occurs as a result of symptoms originating from the vessels and nerves in the brain [1]. MD is one of the most common neurological diseases [2]. It ranks sixth among the most common diseases in the world [3]. The most common symptoms of MD are severe headaches, nausea, vomiting, and sensitivity to sound and light [4]. A migraine patient may have attacks once or twice a month. For this reason, it is important to be able to diagnose MD. MD can be diagnosed and analyzed by experts based on clinical data. However, the manual interpretation of EEG signals by experts can be cumbersome and time-consuming [5]. For this reason, Computer Aided Diagnosis (CAD) systems that can support experts in this regard are important. CAD systems are computer-based systems that can help experts by making quick decisions. Thanks to CAD • Akben et al. [6] analyzed the EEG signals obtained from migraine patients and the HC group under flash stimulation in their study and detected MD with an accuracy of 85% using the Support Vector Machine (SVM) classifier. • Aslan separated the signals into subbands by applying the Tunable Q-Factor Wavelet Transform (TQWT) method to the EEG signals in his studies on the diagnosis of MD. By extracting features from these subbands, the classification process between MD and HC was performed using the Rotation Forest algorithm. As a result of the classification process, an accuracy rate of 89.6% was obtained [10].
• In another study, Aslan applied the Empirical Mode Decomposition (EMD) method to EEG signals and separated them into subbands. Classification using the features extracted from these subbands achieved an accuracy rate of 92.7% with a Random Forest (RF) algorithm [15].
• In a similar study, Subaşı et al. used the Discrete Wavelet Transform (DWT) and the RF algorithm to distinguish MD from the HC group with an accuracy of 85.95% [16].
• In a study conducted for clinical support purposes, Yin et al. [17] distinguished tension-type headaches from migraine with 90% accuracy using a system based on the K-Nearest Neighbors (KNN) algorithm.
• Among studies that used DL, Göker [14] created feature vectors by applying the Welch method to EEG signals and used several ML methods and a Bidirectional Long Short-Term Memory (BiLSTM) model in the classification phase. The most successful performance, classifying the MD and HC groups with 95.99% accuracy, was achieved with the BiLSTM model.
In this study, a new EEG-based hybrid system is proposed for the automatic diagnosis of MD using signal processing methods and DL models. The proposed system aims to determine which combination of signal processing method and classifier performs best in diagnosing MD. Migraine is one of the most common neurological diseases [2,3], yet EEG-based studies on migraine are limited [11]. Our motivation for this study was that there are almost no studies on the diagnosis of MD using EEG signals, especially with DL models [11,14]; we think this study is important because it fills that gap in the literature. In addition, the fact that the dataset used in the study is new (2020) and has been used very little until this research encouraged us to conduct this study. The purpose, summary, and contributions of this study can be explained as follows:
1. A new system based on EEG and DL that can support specialists in the automatic and early diagnosis of MD is proposed.
2. EEG signals recorded in the resting state (R) and under visual (V) and auditory (A) stimuli from MD and HC groups were analyzed. Through signal processing methods and DL models, the MD and HC groups were classified. This study aims to be original and to contribute to future studies.
3. The 1-D EEG signals obtained from the MD and HC groups were preprocessed to remove noise, and the noise-free EEG signals were transformed into scalogram and spectrogram images in the time-frequency domain using the CWT and STFT T-F transform methods.
4. Classification was carried out by applying the scalogram and spectrogram images of the MD and HC groups to several CNN architectures (AlexNet, ResNet50, SqueezeNet) and to a DCNN model that we created ourselves.
5. The effect of the stimuli was also examined by performing the classification process in the three situations (R-A-V). As a result of the classification process, the performances of the CWT and STFT signal processing methods and of the DL models used in the study were compared, and the best-performing state, method, and classifier model were determined. Regarding the combined use of EEG signals, DL, and ML in the diagnosis of MD, as far as we know, this study is the first of its kind compared to similar studies in the literature.
Section 2 presents information about the dataset, the preprocessing step, the signal processing methods, and the DL models. Section 3 presents the results of the study and their interpretation. Section 4 compares the results obtained with those of similar studies and discusses the contributions of this study to the literature as well as its limitations. Section 5 draws conclusions and gives considerations for future studies.
Methodology
In this part of the study, information is given about the data set and the methods used in the proposed model. In this study, a new EEG-based hybrid system for the automatic diagnosis of MD is proposed as a result of applying signal processing methods and CNN models to EEG signals. The processes applied in the proposed system are summarized below, and a flow chart of the study is shown in Figure 1.
• In the preprocessing step, the noise in the multi-channel EEG signals recorded for the visual stimulus, auditory stimulus, and resting state was removed using a 0.5-40 Hz finite impulse response (FIR) filter.
• Scalogram and spectrogram images of the signals were created in the T-F plane by applying the CWT and STFT T-F transform methods to the noise-free EEG signals.
• The scalogram and spectrogram images were applied, for the first time, to CNN architectures (AlexNet, ResNet50, SqueezeNet) and to the proposed DCNN model for the three states (R-A-V). The classification process analyzed the MD-HC groups for the three situations and the applied methods, and the classification performance criteria (Acc., Sens., and Spec.) were obtained and interpreted for all situations and methods applied in the study.
Participants and Dataset
The dataset of EEG signals used in this study was created recently and publicly shared by Carnegie Mellon University [18]. EEG signals were recorded from 21 HC subjects without headache (12 females/9 males; 19-54 years old; mean age 27.9 years) and 18 migraine patients in the interictal period (13 females/5 males; 19-54 years old; mean age 27.6 years). Subjects participating in the study were selected according to the criteria of the International Headache Society. The EEG signals have a sampling frequency of 512 Hz and were recorded from 128 channels [18] using the BioSemi ActiveTwo system. EEG recordings were taken while sending audio-visual stimuli to the subjects and during the resting state. For the visual stimulus, a grid pattern with changing contrast was presented at a frequency of 4 Hz or 6 Hz; for the auditory stimulus, auditory tones at a frequency of 4-6 Hz were presented; in the resting state, subjects were asked to focus on a fixed plus sign on the screen [19]. In this study, all three situations were analyzed, and the results were compared. Detailed information about the dataset and experimental setup can be found in refs. [18,19].
Signal Preprocessing and Time-Frequency Transform Techniques
In the first stage of this study, the FIR filter (0.5-40 Hz) was preferred in the preprocessing stage to clean the noises in the EEG signals used. FIR filters are easy to implement. It is also widely used due to its linear phase property and frequency stability [20]. In addition, 2 times downsampling was applied to the EEG signals, and the sampling frequency was set to 256 Hz. This helped reduce the processing load. After the preprocessing stage, CWT and STFT from T-F transform methods were applied to the noise-free signals to facilitate the analysis of the EEG signals and capture the details simultaneously. EEG signals contain oscillating and fluctuating frequency components. In order to obtain more information from oscillating and non-stationary signals such as EEG, T-F transform methods are applied to generate T-F representations of the signal. Thanks to these methods, the relationship between the time and frequency properties of the signal can be examined. It
has been stated that images in the T-F plane obtained from non-stationary physiological signals such as EEG can be used with deep learning models [21]. For this reason, scalogram and spectrogram images were obtained in the T-F plane by using the MATLAB software program thanks to the transformation techniques applied in this study. Sample images obtained from the MD and HC groups are given in Figure 2. These images have been adjusted to the appropriate input sizes according to the models in the classification process and made ready for use as data in classification models.
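The preprocessing chain described above, FIR filtering followed by downsampling by 2, can be sketched in plain Python (the study itself used MATLAB; the 2-tap averaging filter below is an illustrative stand-in, not the actual 0.5-40 Hz band-pass design):

```python
def fir_filter(x, taps):
    """Causal FIR filter: y[n] = sum_k taps[k] * x[n - k], with zero-padded past."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]
        y.append(acc)
    return y

def downsample(x, factor=2):
    """Keep every factor-th sample; factor=2 takes a 512 Hz recording down to 256 Hz."""
    return x[::factor]

segment = [1.0] * 8                          # toy constant segment standing in for one channel
smoothed = fir_filter(segment, [0.5, 0.5])   # illustrative 2-tap moving average
reduced = downsample(smoothed, 2)
print(reduced)  # [0.5, 1.0, 1.0, 1.0]
```

Note that the order matters: filtering before downsampling, as described above, limits the bandwidth before samples are discarded and thus avoids aliasing.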
Continuous Wavelet Transform
The CWT method is a suitable and preferred method for the analysis of non-stationary signals that vary with time and scale [22]. The CWT method can provide appropriate time-frequency resolution and capture transients in the EEG signal with a high temporal resolution [23]. Several wavelets can be used in the CWT method (e.g., the Morlet, Morse, and Bump wavelets). In this study, the CWT wavelets were tested, and the Bump wavelet, which gave the most successful result for all three cases, was preferred. As a result of applying the CWT method to the EEG signals, scalogram images were obtained. The CWT is defined in Equation (1), where x(t) represents the EEG signal on the time axis, ψ(t) represents the mother wavelet, and a and b are the scale and translation parameters [22,23]:

W(a, b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt \quad (1)
Short-Time Fourier Transform
The STFT method is an improved version of the Fourier method. In this method, the signal in the time domain is divided into blocks, and the Fourier transform is evaluated in each block. The STFT method, also known as the windowed Fourier transform, acts as a symmetric band-pass filter and is one of the most popular T-F analysis methods preferred in studies and compared with the CWT [24]. As a result of applying the STFT method to the EEG signals, spectrogram images were obtained. The STFT is defined in Equation (2), where x(t) represents the signal and w(t) is the window function; the windows for each block have equal length, and the x(t) signal is assumed to be stationary within the window time [25]:

X(t, f) = \int_{-\infty}^{\infty} x(\tau)\, w(\tau - t)\, e^{-j 2 \pi f \tau}\, d\tau \quad (2)

The spectrogram of the signal can then be defined as |X(t, f)|^2.
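A discrete version of the STFT spectrogram can be sketched with the Python standard library alone (the study itself used MATLAB; this illustrative version uses non-overlapping rectangular windows and a naive DFT, whereas practical spectrograms use overlapping tapered windows and the FFT):

```python
import cmath
import math

def dft(block):
    """Naive DFT of one window (O(N^2); a real pipeline would use the FFT)."""
    n = len(block)
    return [sum(block[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def spectrogram(x, win_len):
    """Non-overlapping rectangular-window STFT, returning |X(t, f)|^2 per frame."""
    frames = [x[i:i + win_len] for i in range(0, len(x) - win_len + 1, win_len)]
    return [[abs(c) ** 2 for c in dft(frame)] for frame in frames]

# A cosine at exactly 4 cycles per 16-sample window concentrates its energy
# in frequency bin 4, which is how a steady rhythm shows up in the T-F plane.
win = 16
x = [math.cos(2 * math.pi * 4 * t / win) for t in range(2 * win)]
spec = spectrogram(x, win)
peak_bin = max(range(win // 2 + 1), key=lambda k: spec[0][k])
print(peak_bin)  # 4
```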
Deep Learning Models
DL is an ML approach consisting of neural networks that enable data features to be learned sequentially [26]. In DL methods, features are learned automatically; in contrast to ML methods, there is no need to extract features beforehand. For this reason, DL methods are often superior to ML methods [26]. DL models are of great interest in the classification of EEG signals and in the diagnosis of neurological diseases [11], and CNN is the most widely used of these models. CNN models are preferred in this study because DL models can perform feature selection automatically and generally perform better than ML methods. In the classification phase of this study, AlexNet, ResNet50, and SqueezeNet, which are commonly used CNN architectures, were preferred, together with the DCNN model we recommend. In this way, the performances of the preferred CNN architectures and the proposed DCNN classifier model were compared. Information about the preferred architectures and the proposed DCNN model is given in Section 2.3.1.
Convolutional Neural Networks and the Proposed DCNN Model
CNN-based models are among the most popular deep learning techniques. They consist of multiple layers and are used for feature extraction and classification [27]. In general, neural networks consist of an input layer, one or more hidden layers, and an output layer. CNN-based models have become popular in recent years for the classification of signals or images and for object recognition [28-30]. In addition, CNN models are generally regarded as the best-performing DL networks and are frequently preferred in medical image classification and biomedical signal processing studies [31]. In a self-designed CNN, several parameters may need to be tuned, and this process can be time-consuming; for this reason, some studies prefer well-designed CNN architectures such as AlexNet and DenseNet at the classification stage [32]. The information in a raw image processed by a CNN is preserved: in an image applied as input to a CNN model, the information between the pixels is carried through the network [30,32]. A CNN generally consists of three layer types: the convolution, pooling, and fully connected layers.
i. The convolution layer is the basic building block of the convolutional network and contains filters that are tuned during the training process. It is the layer responsible for producing the output of each neuron from the input layer; the final output of the convolution layer is a vector [29,32,33].
ii. The pooling layer subsamples the output of the convolution layer. By reducing the number of parameters and calculations in the network, mismatch in the network is controlled and overfitting can be avoided [28,32,33].
iii. The fully connected layer is where the classification process takes place. Neurons in this layer are connected to all activations in the previous layer [28,32,33].
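The three layer types above can be illustrated with a minimal forward pass in plain Python (a sketch only: the 2x2 kernel is a fixed example, whereas real CNNs learn their filters and add channels, strides, and padding):

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most DL frameworks)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(kernel[a][b] * img[i + a][j + b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def relu(fmap):
    """Element-wise ReLU activation: negative responses are clipped to zero."""
    return [[max(0.0, v) for v in row] for row in fmap]

def maxpool2(fmap):
    """2x2 max pooling with stride 2, subsampling the feature map."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

img = [[float(5 * i + j) for j in range(5)] for i in range(5)]   # toy 5x5 "image"
fmap = maxpool2(relu(conv2d(img, [[1.0, 0.0], [0.0, 1.0]])))
print(fmap)  # [[18.0, 22.0], [38.0, 42.0]]
```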
In this study, AlexNet, ResNet50, and SqueezeNet from CNN architectures were used. Detailed information about these architectures can be found in Ref. [33]. These architectures were created by using the layers accepted in the literature and preferred in studies. In addition to these architectures, a DCNN model, whose layers we created ourselves, is proposed. While creating the DCNN architecture, different layers and parameters were trialed many times. As a result of these trials, the layers and parameters of the DCNN model that gave the most successful result were determined. The DCNN model with the most successful performance was created. The initial learning rate of the model is 0.0001; the max epochs are 12. The mini-batch size was set to 64, and Adam was chosen as the optimizer. In this study, the proposed DCNN model for the detection of migraine disease consists of an Input layer, Convolution layer, ReLU layer, Max Pooling layer, Fully Connected layer, Softmax layer, and Classification layers. The layer information and architecture of the proposed model are given in Figure 3. In the data preparation part, before the classification process, the CWT and STFT equivalents of the EEG signals of 39 individuals (21 HC-18 MD) were obtained from 64 channels, and the data were made ready for the classifier input. The images obtained as a result of T-F transformation techniques and used as data in the classification stage were adjusted to appropriate input sizes according to the models. For the proposed DCNN model, the input image size is set to 256 × 256 × 3. Input sizes for AlexNet and SqueezeNet models are 227 × 227 × 3. For ResNet50, the input dimensions are set to 224 × 224 × 3. As a result of these processes, MD and HC groups were classified.
Classification Process and Performance Evaluation Metrics
In this study, the models described in Section 2.3.1 were used to classify the MD and HC groups. In the classification process, the images in the T-F plane obtained from the EEG signals using the methods described in Section 2.2 were used as data. The classification stages were carried out using the MATLAB software program, and the k-fold cross-validation (CV) technique was applied. In the k-fold CV technique, the data is divided into k equal parts; k-1 of the parts are used to train the model, and the remaining part is used for the testing phase. These stages are repeated k times, and the performance of the model is determined by averaging the results, so that possible deviations and errors are minimized. In this study, k was set to 5 (CV:5): in each fold, 20% of the data was used for testing and 80% for training. For each fold, the acc., sens., and spec. performance criteria evaluated in the study were calculated, and the performance of each classifier model was obtained by averaging these values. The diagram of the CV:5 technique is shown in Figure 4.
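The CV:5 procedure described above can be sketched in a few lines of plain Python (in practice a library routine such as scikit-learn's KFold would be used; this sketch assumes the sample count divides evenly by k):

```python
def kfold_splits(n_samples, k):
    """Yield (train_indices, test_indices) for each of the k folds.

    Assumes n_samples is divisible by k, so every fold tests on exactly
    1/k of the data and trains on the remaining k-1 parts.
    """
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for fold in range(k):
        test = indices[fold * fold_size:(fold + 1) * fold_size]
        train = [i for i in indices if i not in test]
        yield train, test

# CV:5 on 10 samples: each fold tests on 20% and trains on 80% of the data.
for train, test in kfold_splits(10, 5):
    print(len(train), len(test))  # 8 2, printed five times
```

Averaging a model's score over the five folds then gives the reported performance, as described above.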
As a result of the classification process, the acc., sens., and spec. ratios, which comprise the performance criteria evaluated in the study, were calculated according to the sample confusion matrix given in Figure 5. During the calculation process, the true positive (TP), true negative (TN), false positive (FP), and false negative (FN) rates were used. The acc., sens., and spec. calculations are given in Equations (3)-(5).
• TP is the number of data samples predicted by the model to be in the MD class that are actually in the MD class.
• FP is the number of data samples that do not actually belong to the MD class but that the model mistakenly predicts as belonging to the MD class.
• TN is the number of data samples actually in the HC group that the model correctly predicts as belonging to the HC group.
• FN is the number of data samples that actually belong to the MD class but that the model incorrectly predicts as belonging to the HC group.
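From these four counts, accuracy, sensitivity, and specificity (Equations (3)-(5)) follow the standard definitions, sketched here in Python with illustrative counts:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # Eq. (3): fraction of all predictions correct
    sens = tp / (tp + fn)                   # Eq. (4): fraction of MD cases correctly flagged
    spec = tn / (tn + fp)                   # Eq. (5): fraction of HC cases correctly cleared
    return acc, sens, spec

# Illustrative counts: 8 of 10 MD samples detected, 9 of 10 HC samples correctly cleared.
print(metrics(tp=8, tn=9, fp=1, fn=2))  # (0.85, 0.8, 0.9)
```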
Experimental Results
Manual analysis of non-stationary physiological signals such as EEG can be difficult. Analyzing these signals using traditional methods requires steps such as feature extraction, feature selection, and classification [34], which can be laborious and time-consuming. To alleviate this, DL models that can automatically extract features and perform classification are preferred; for this reason, several CNN architectures and the DCNN model we created were used in this study. In this study, a new hybrid system based on EEG signals and a DL model, which can support experts by providing an automatic diagnosis of MD, is proposed. In the proposed system, EEG signals recorded from 18 MD patients and 21 HC subjects under auditory and visual stimuli and in the resting state were used. The noise in these signals was removed in the preprocessing stage. Scalogram and spectrogram images in the T-F plane were obtained by applying the CWT and STFT T-F transform methods to the noise-free 1-D EEG signals; we aimed to capture the transient moments of the non-stationary EEG signals by providing high-resolution images. The images obtained from the T-F transformation techniques and used as data in the classification phase were adjusted to the appropriate input sizes for the classifier models. These data were then applied as inputs to the AlexNet, SqueezeNet, and ResNet50 CNN architectures and to the suggested DCNN model, and classification was performed. A total of 2496 scalogram-spectrogram images obtained from 64 channels of the 39 participants were used in the classification process. The MD and HC groups were classified by applying the procedures described in Section 2.4.
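The image count stated above is consistent with the participant and channel figures, as a quick arithmetic check shows:

```python
hc, md = 21, 18           # healthy controls and migraine patients
participants = hc + md    # 39 subjects in total
channels = 64             # channels used per participant at this stage
total_images = participants * channels
print(total_images)  # 2496, matching the count stated in the text
```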
The classification was performed separately for the CWT, STFT methods, and three states (R-A-V). In this way, while the performances of the models used in the classification were compared, the performance of both the methods and the three states were also compared. As a result of the classification process, acc., sens., and spec. values were obtained and interpreted. The results obtained with the CWT and STFT methods and DL models for the resting state are given in Table 1. While the results of the auditory stimulus status are given in Table 2, the results of the visual stimulus status are given in Table 3. This study should be considered both as a comparison of methods and as a comparison of the states of resting, auditory, and visual stimuli. For the comparison of the methods used in this study, the performance of CWT and STFT methods in DL models was examined. In the same way, the performance results in the DL models were obtained by examining the stimulus states separately. In this way, the most successful method, classifier, and state were determined.
Considering the performances of the methods used in the study, examination of Tables 1-3 shows that the CWT method was more successful than the STFT method according to the classifier performance criteria. The CWT method performed slightly better than the STFT method in all classifiers preferred in the study. With the CWT method, the highest accuracy rate was obtained with the AlexNet classifier in the resting state (Acc: 99.74%), while with the STFT method, the best resting-state result (Acc: 99.32%) was obtained with the recommended DCNN model.
Considering the study according to the resting, auditory, and visual stimulus states, Tables 1-3 show that the most successful results were obtained in the resting state in all classifier models. The next most successful results were obtained in the auditory stimulus situation, with the visual stimulus situation being slightly less successful. For the resting state, the most successful results were obtained with the CWT method and the AlexNet classifier (Acc: 99.74%, Sens: 99.9%, Spec: 99.52%). For the auditory stimuli, the most successful results were obtained with the CWT method and the recommended DCNN model (Acc: 99.44%, Sens: 99.04%, Spec: 99.74%). For the visual stimuli, the CWT method and the DCNN model showed the most successful performance (Acc: 98.96%, Sens: 98.24%, Spec: 99.5%).
Discussion
In this study, a new system based on EEG signals and DL is proposed for the effective and early diagnosis of migraine disease. In the proposed system, images were created in the T-F plane by applying the CWT and STFT methods to EEG signals. It has been stated that images in the T-F plane obtained from biomedical signals can be evaluated together with DL models to yield successful results [21], and other studies have also compared the CWT and STFT methods [34]. For this reason, both methods were used in this study, and their performances were compared by evaluating them in CNN models. Unlike in ML methods, steps such as feature extraction and feature selection are performed automatically in DL models [11,35], so faster results can be obtained. For this reason, three different CNN architectures were used in the classification stage of this study, together with the DCNN model, whose layers and parameters we adjusted ourselves. As a result of the study, the state, method, and classifier model that showed the most successful performance were determined. Looking at the results in Tables 1-3, it is clear that the proposed DCNN model performs successfully and provides accurate results. The proposed model could be improved with different layers or parameters, but we think that it is suitable for similar studies as it is.
EEG-based DL studies are promising and such studies have become increasingly widespread in recent years. It has been stated in the literature that DL-based CAD systems are widely used for the diagnosis of many diseases [35]. It can be seen in the recently published literature that successful studies on neurological diseases make use of EEG signals [22,27,30,34]. However, studies on MD diagnosis using EEG signals with ML and especially DL models are scarce [14,35,36]. Studies on MD diagnosis using EEG signals and DL models seem to be lacking and new studies are needed [11,37]. We reviewed the recent studies on the diagnosis of MD based on EEG signals and ML-DL and compared their results with the results obtained in this study, as seen in Table 4. As can be seen in Table 4, studies using ML are more common than DL-based studies. Upon examining studies that diagnose MD based on EEG signals and ML [6,10,15,16,36], we identified that some features are extracted from EEG signals and evaluated in ML methods. Among these studies, Aslan [15] achieved the most successful performance in his study which involved a EMD method and RF classifier (Acc: 92.47). Regarding DL, Göker [14] classified MD and HC groups with 95.99% accuracy. When we look at the studies in Table 4, it is clear that there is only one study on visual stimulus, in which EEG signals at rest were mostly used [6]. As far as we know, no such study has been conducted on auditory stimuli. In this study, EEG signals recorded depending on the resting state, visual stimulus, and auditory stimulus were used. In this way, the most successful method and classifier model were determined while the stimulus effect was also examined. As far as we know, this study is the first of its kind. According to the results in Table 4, it is clear that this study performed more successfully than similar studies in the literature. The positive aspects of this study are as follows: 1.
We think that this study is very comprehensive. Besides the EEG signal and DL model-based automatic diagnosis of MD, the effect of three conditions (R-A-V) was also investigated. In addition, a single T-F method was not relied upon: the CWT and STFT methods, both widely preferred in other studies, were applied and their performances compared. Furthermore, in addition to the CNN architectures frequently used in the literature, our own DCNN model was created and the performances of these classifiers were compared. To our knowledge, this study is the first in the literature to do this.
2.
Although EEG- and DL-based studies have been conducted on the diagnosis of MD [14,35], this study is the first of its kind. Our research made it clear that there are few studies on the diagnosis of migraine disease, with DL-based studies being especially lacking. We therefore think that this study is important in terms of filling this gap in the literature.
3.
We consider it an advantage that the dataset used in this study is new and has not been used much.
4.
It is known that the CWT method gives more detailed features than other T-F methods and is preferred in other studies [35]. Upon examining the results obtained in this study, the CWT method was found to be more successful, which is in alignment with the existing literature. 5.
The DCNN model proposed in this study gives results close to, and in some cases better than, those of the CNN architectures widely preferred in the literature. We think that the proposed model can be evaluated in future studies on different migraine data or on the diagnosis of other neurological diseases. 6.
As far as we know, this is the first study on EEG signal- and DL-based diagnosis of MD covering the resting state and visual and auditory stimuli. According to the results obtained in this study (Tables 1-3), we think that the proposed system has potential in the diagnosis of MD.
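To make the classification stage concrete, the sketch below shows a minimal CNN-style forward pass (convolution, ReLU, pooling, dense softmax) over a toy T-F image, in plain numpy. The layer sizes, kernel counts, and random weights are illustrative assumptions only; the actual DCNN layers and parameters are those reported in the study, not these.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2-D cross-correlation of one (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling over each feature map (channels-first)."""
    K, H, W = x.shape
    return x[:, :H // 2 * 2, :W // 2 * 2].reshape(K, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy T-F "image" standing in for a CWT scalogram (sizes are illustrative).
img = rng.standard_normal((32, 32))
kernels = rng.standard_normal((4, 3, 3)) * 0.1
features = maxpool2(relu(conv2d(img, kernels)))              # (4, 15, 15)
logits = rng.standard_normal((2, features.size)) @ features.ravel()
probs = softmax(logits)                                      # MD vs. HC class probabilities
```

In a real DCNN these operations are stacked several layers deep and the kernels are learned by backpropagation; the point here is only that the T-F image goes in and class probabilities come out with no hand-crafted feature-extraction step.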
In addition to the positive aspects of this study, we think that there are also some limitations. These limitations are as follows: 1.
Studies on MD diagnosis using EEG signals and DL models are very scarce. For this reason, there have not been many studies against which we can compare the results obtained in this study.
2.
EEG-, ML-, and DL-based studies on the diagnosis of MD are scarce, and as far as we know there is no such study on the stimulus effect. For this reason, although the results we obtained in this study are promising, there is no study against which we can compare the stimulus effect.
3.
We think that the amount of data used in the study was sufficient; however, more data could have improved the results.
4.
The method used in the study and the proposed model could not be tested on other data because no other migraine dataset was available. The performance of the proposed method and model can be compared using different migraine data in the future.
Conclusions and Future Work
Although migraine is one of the most common neurological diseases, studies on migraine are lacking; EEG signal- and DL-based studies on MD diagnosis, in particular, are very few, and new studies are needed. One of our biggest motivations for conducting this study was that very few studies of this type exist. Early diagnosis of MD can be difficult and time-consuming for specialists. For this reason, this study aimed to propose an EEG- and DL-based system that can support specialist physicians in the automatic and early diagnosis of MD. For this purpose, EEG signals of the MD and HC groups were examined under three conditions. Two different T-F methods were applied and their performances were compared. In the classification phase, performances were compared using three different CNN architectures and the DCNN model we proposed. In this comprehensive study, the stimulus state, method, and classifier model showing the most successful performance were determined. The results obtained show that the methods and classifier models used can help experts in the early diagnosis of MD. Considering the results in Figure 6, the preferred methods and classifier models are promising for the diagnosis of MD. In addition to the ready-made CNN architectures widely preferred in other studies, the DCNN model we created was also used in the classification phase. Our proposed model gave better results than the SqueezeNet and ResNet50 architectures in this study, and results similar to those of the AlexNet architecture. Although the proposed model gave successful results, we think that its layer and parameter settings should be improved and evaluated on different migraine data in the future. In this way, the performance of the model can be interpreted more accurately.
However, we think that the proposed methods and models should be evaluated using different migraine data in the future to more accurately determine their effectiveness. The proposed methods and models could also be considered for use in studies regarding the early diagnosis of different neurological diseases based on EEG.