However, intraseasonal differences in mean macronutrient proportions were revealed when anthropogenic and natural diets were compared (spring ANOVA p = .019; summer ANOVA p < .001; autumn ANOVA p = .012) (Table 3; Figure 5). Across seasons, diets receiving anthropogenic subsidies tended to be higher in carbohydrate and lower in protein than natural diets.
EMT of the proportions of macronutrients (protein = P, carbohydrate = C, and lipid = L) in seasonal (spring, summer, and autumn) brown bear diets in populations with natural diets versus those receiving anthropogenic subsidies. The geometric mean for each season is shown by a filled symbol surrounded by 90% and 99% confidence regions. For reference, the blue line represents the preferred optimal proportion of protein (17% ± 4) selected by captive bears
Many diets were close to the 17% protein isoportion line during autumn (Figure 4). The 99% confidence region around the autumn geometric mean included the 17% protein isoportion line, suggesting the two are not significantly different at that level. Both spring and summer diets were generally higher than 17% protein. Of the three winter diets reported, two were near the 17% line, while one was noticeably lower (not shown). During autumn, the mean diets of both anthropogenic and natural populations had confidence intervals that included the intake target region of captive bears (Figure 5); however, diets receiving anthropogenic subsidies made up the majority of autumn diets close to the intake target.
There was considerably more variation in the proportion of protein and carbohydrate consumed in natural compared to anthropogenic diets during autumn, with one population (Gau et al. (2002), Diet_ID: 2; Table 1) consuming very little (2%) carbohydrate. Because this natural diet was a potentially influential observation (as assessed visually in R by plotting the output of the cooks.distance() function (Cook & Weisberg, 1982) in the {stats} package), we performed a sensitivity analysis by running a separate LM and ANOVA for autumn without that diet. Anthropogenic and natural diets remained significantly different in autumn (ANOVA p = .021) after removal of the Gau, Case, Penner, and McLoughlin (2002) diet, albeit with a lower mean proportion of protein and lipid and a higher proportion of carbohydrate (P = 0.232, C = 0.515, L = 0.253); thus, the mean autumn diet of natural populations was closer to the intake target region with that diet removed.
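As a rough illustration of this kind of sensitivity check, the sketch below (in R, on entirely hypothetical data, and using a common 4/n rule of thumb rather than the visual assessment described above) flags influential observations with Cook's distance and refits the model without them:

```r
# Hypothetical data: one diet per row, with an artificially extreme last row
set.seed(1)
diets <- data.frame(
  protein = c(runif(19, 0.1, 0.6), 0.95),
  type    = factor(rep(c("natural", "anthropogenic"), 10))
)
fit <- lm(protein ~ type, data = diets)
cd  <- cooks.distance(fit)        # from the {stats} package
plot(cd)                          # visual check for influential observations
keep <- cd <= 4 / nrow(diets)     # rule-of-thumb cut-off (an assumption here)
fit2 <- lm(protein ~ type, data = diets[keep, ])
anova(fit2)                       # does the anthropogenic/natural effect persist?
```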
A greater number of anthropogenic diets were higher in carbohydrate and lower in protein than the intake target; conversely, a greater number of natural diets were higher in protein and lower in carbohydrate relative to the intake target of captive bears. Of note, confidence regions around the mean summer anthropogenic diet included the upper end of the intake target region for protein, while an individual diet point lay near the 17% target line, indicating that the preferred ratio self-selected by captive bears is also achievable during summer for some bear populations consuming human-sourced foods.
The range of dietary macronutrient proportions that we observed among brown bear populations supports the nutritional generalism hypothesis that the species has a wide fundamental macronutrient niche. In combination with previous studies documenting the types and compositions of foods consumed by this species, the brown bear can thus be classified as a generalist in all three aspects of the multidimensional nutritional niche. Across populations, the geometric mean annual diet of brown bears was close to an equal one-third proportion (i.e., the simplex barycentre) for all macronutrients despite considerable interpopulation variance, suggesting that the species has a remarkable ability to tolerate the macronutritional characteristics of its nutritional environment. Thus, we provide evidence that one function of omnivory in the brown bear is to enable occupation of a diverse range of habitats and macronutritional environments.
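For readers unfamiliar with compositional statistics, the sketch below shows how such a closed geometric mean composition is computed in R; the three diets are hypothetical, not the study's data:

```r
# Each row is one population's diet: proportions of protein, carbohydrate, lipid
diets <- rbind(c(0.55, 0.15, 0.30),
               c(0.20, 0.60, 0.20),
               c(0.30, 0.35, 0.35))
gm <- apply(diets, 2, function(x) exp(mean(log(x))))  # column-wise geometric mean
gm / sum(gm)  # close the composition so it sums to 1: the geometric mean diet
```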
Variation in the macronutrient proportions of annual diets was not equal among macronutrients, however: across populations, the greatest variation was observed for carbohydrate, while protein and lipid were more codependent. The nutritional explanation for this is that animal prey is a source of both protein and lipid, with negligible carbohydrate content (Coogan et al., 2014). The highest proportion of protein in annual diets, and the lowest proportion of carbohydrate, was found in Canada's central Arctic, where bears displayed high levels of predation on caribou (Rangifer tarandus) and ground squirrels (Spermophilus parryii; Gau et al., 2002). Diets relatively high in carbohydrate occurred in ecosystems where bears consumed starchy roots (e.g., Hedysarum alpinum) and fruit (e.g., Munro, Nielsen, Price, Stenhouse, & Boyce, 2006). However, carbohydrate proportions were highest in annual diets with anthropogenic subsidies, such as agricultural crops and supplemental feeding of corn, oats, and wheat (e.g., Paralikidis, Papageorgiou, Kontsiotis, & Tsiompanoudis, 2010; Rigg & Gorman, 2005; Sato, Aoi, Kaji, & Takatsuki, 2004; Stofik, Merganic, Merganicova, & Saniga, 2013). Similarly, the proportion of lipid in annual diets was highest in populations that consumed relatively more domestic livestock (e.g., Clevenger, Purroy, & Pelton, 1992; Dahle, Sorensen, Wedul, Swenson, & Sandergren, 1998). Therefore, our results support the hypothesis that bear diets including anthropogenic food subsidies are, on average, higher in proportions of nonprotein macronutrients, especially carbohydrate.
Seasonally, brown bears displayed significant variation in the proportion of macronutrients consumed, indicating a tolerance for a wide range of dietary macronutrient proportions throughout the active season, thereby supporting our third hypothesis. Protein and lipid proportions became less codependent during autumn, which is consistent with the general pattern of decreasing carnivory combined with the consumption of high-fat autumn food resources, such as hard mast, in some ecosystems. The proportion of carbohydrate in bear diets was highest in the autumn, mostly due to the timing of fruit production, while some populations also consumed starchy roots during this period. However, diets of populations receiving anthropogenic subsidies were on average higher in the proportion of carbohydrate and lower in protein across all seasons. For example, the diets of Greek (Paralikidis et al., 2010), Italian (Ciucci et al., 2014), and Slovakian (Stofik et al., 2013) bears were high in carbohydrate during summer due to fruit and anthropogenic food consumption. Given that such foods allow bears to feed closer to their preferred proportions of macronutrients, it is not surprising that anthropogenic foods are sources of bear–human conflict (Can, D'Cruze, Garshelis, Beecham, & Macdonald, 2014; Coogan & Raubenheimer, 2016; Morehouse & Boyce, 2017). Garbage, which was not considered in this study and is seldom reported (e.g., Mattson, Blanchard, & Knight, 1991; Rigg & Gorman, 2005), would likely have a similar effect on diet proportions (Coogan & Raubenheimer, 2016). Conversely, the highly carnivorous natural diet of central Arctic bears (Gau et al., 2002) was highest in protein during the autumn.
The mean proportions of macronutrients consumed by bears in autumn were generally near those self-selected by captive bears, which supports our hypothesis that the optimal diet preferences of bears coincide with the nutritional environment during the hyperphagic period, owing to the strong selective pressures associated with hibernation. For instance, there is a strong relationship between the body fat percentage of bears and their survival and reproductive capacity during hibernation (López-Alfaro et al., 2013; Robbins, Meray, Fortin, & Lynne Nelson, 2012). In addition to behavioral adaptations, brown bears have acquired a suite of physiological adaptations facilitating adiposity while remaining healthy (Rivet, Nelson, Vella, Jansen, & Robbins, 2017). What mass bears gain during spring, when their diets are higher in protein, is primarily lean mass (Hilderbrand et al., 1999; Swenson, Adamic, Huber, & Stokke, 2007). Yet, the importance of spring lean mass accrual should not be underestimated, as protein is transferred from mother to cub via milk during the hibernation period (López-Alfaro et al., 2013).
Relative to natural diets, however, diets receiving anthropogenic subsidies were closer to the intake target of captive bears during autumn. Furthermore, there was less variation in autumn anthropogenic diets relative to the intake target region, which suggests that bears in such populations are not only more likely to consume optimal diets but are also buffered from environmental limitations in the natural food supply. Thus, brown bears receiving anthropogenic subsidies as part of their diets may have a nutritional advantage over those consuming natural diets. However, bears consuming anthropogenic subsidies may also be more likely than natural populations to consume lower proportions of protein than is optimal, which may adversely affect fitness outcomes. For example, diets lower in protein than the preferred proportion selected by captive bears resulted in lower rates and efficiencies of gain compared to diets higher in protein than the self-selected optima (Erlenbach et al., 2014).
As mentioned, there was noticeable variation among the macronutrient proportions of populations with natural diets, with some noticeably higher in protein than the proportion selected by captive bears. Given that dietary preferences and optima are expected to be under natural selection, it is possible that the intake target of brown bears varies among populations. For example, populations consuming high proportions of protein and very little carbohydrate, such as in the central Arctic (Gau et al., 2002), may have different intake targets than populations that have evolved under different environmental conditions; even within populations, marked differences in individual foraging behavior (i.e., carnivory versus herbivory) have been observed (Edwards, Derocher, Hobson, Branigan, & Nagy, 2011). Likewise, such dietary adaptation has implications for populations receiving anthropogenic subsidies if their dietary optima shift in response to their nutritional environment, especially if such subsidies later cease to be available. At the same time, however, the range of macronutrient proportions observed across populations of this species demonstrates a remarkable tolerance of varying dietary macronutrient proportions.
The wide multidimensional nutritional niche of the brown bear supports previous suggestions that, as a species, brown bears may be better equipped to face some of the nutritional challenges associated with climate change, such as changes in available food resources (Roberts, Nielsen, & Stenhouse, 2014). Yet, there may be unexpected relationships between brown bears and a changing climate, as their macronutrient preferences may have broad ecological implications when the timing of seasonal food resources changes. One study, for example, found that brown bears preferentially switched to eating fruit that became available several weeks early, in place of the spawning salmon they historically consumed during that period (Deacy et al., 2017). The brown bear's preference for high proportions of nonprotein macronutrients was given as a possible explanation for this diet shift (i.e., the proportion of macronutrients in some fruit is very close to the preferred ratio of captive bears; Coogan et al., 2014). This situation is similar to that of bears receiving anthropogenic subsidies, in that both are able to feed on foods offering macronutritional properties otherwise temporally or ecologically unavailable.
An interesting extension of this research would be to explore how dietary macronutrient proportions influence the fitness and population demographics of brown bears. Macronutrient proportions have physiological effects on individual body composition, with high-protein diets generally resulting in animals with lower body fat and greater lean mass than animals on high-carbohydrate or high-lipid diets, and vice versa (Solon-Biet et al., 2014). This pattern can be observed among brown bears, which gain primarily lean mass in spring and fat mass in autumn. Examining other effects of macronutrient proportions on bear populations may be revealing. Low-protein, high-carbohydrate diets have been associated with increased longevity and health span across several model organisms; conversely, high-protein, low-carbohydrate diets have been associated with reduced lifespan but increased reproductive parameters (Raubenheimer, Simpson, Le Couteur, Solon-Biet, & Coogan, 2016).
Furthermore, there is indirect evidence that the proportions and amounts of dietary macronutrients interact to affect brown bear population dynamics, as local population density has been related to spatial patterns in the amounts of both ungulates (a source of protein and lipid) and fruit (a source of carbohydrate) together (Nielsen, Larsen, Stenhouse, & Coogan, 2017). Following from the above example, it is important to note that the proportions and amounts of macronutrients interact to produce biological outcomes; investigating the relationships between the amounts and proportions of dietary macronutrients, and their possible population-level effects, is therefore an important area for future research. On the other hand, in many animals, dietary macronutrient proportions predict the absolute amounts eaten (Raubenheimer, Machovsky-Capuska, Gosby, & Simpson, 2014).
Leading on from this, we propose that future research examine spatially explicit factors influencing the macronutrient proportions of diets. Increasing carnivory has been hypothesized as a general adaptation to increasing latitude in omnivorous mammals (Vulla et al., 2009); however, other work has suggested that dietary patterns are better explained by spatially explicit environmental factors (Gaston, Chown, & Evans, 2008). Patterns in brown bear diet, for instance, were better explained by climatic than by geographic factors (Bojarska & Selva, 2012). It would be interesting to examine the relationships between such factors and nutrition. Furthermore, we suggest that the effects of anthropogenic food subsidies on brown bears at the levels of individuals, populations, and communities deserve more research.
In closing, we present a synthesis of macronutritional niche theory, nutritional geometry, and compositional analysis to produce a novel view of the nutritional ecology of the brown bear and of functional omnivory more generally. Furthermore, we demonstrate the effect of anthropogenic subsidies on the macronutrient proportions of brown bear diets, the implications of which are open to future study. Last, while it may be argued that compositional analysis is the appropriate way to analyze proportional data, our univariate tests were in agreement with the compositional results. Similar agreement between these methods has been documented elsewhere, where it was suggested that traditional statistical methods are robust to compositional data when variance is not too great (Ros-Freixedes & Estany, 2013).
Rising hospitalization costs, separation from family, and the risk of nosocomial infections have made caring for patients at home increasingly welcome. Tracheostomy is the creation of an opening in the trachea to provide an airway in patients with upper respiratory tract obstruction caused by tumors of the larynx, thyroid, or esophagus; it is also commonly performed, sparing the larynx, in patients who require long-term intubation. In the absence of proper care, complications of tracheostomy include infection, obstruction of the tube lumen due to inadequate cleaning, and accidental dislodgement of the tracheostomy tube. The new circumstances in patients' lives should also be considered, including an unpleasant change in their appearance and the limitations imposed by a tracheostomy. This group of patients may suffer from stress, low self-esteem, a negative perception of their appearance, difficulty communicating effectively with others, and isolation, all of which can adversely affect their quality of life.
There is no comprehensive agreement on the definition of quality of life. However, according to the World Health Organization, quality of life represents people's perception of their position in life in the context of the culture and value system in which they live and in relation to their goals, expectations, standards, and priorities. The concept is therefore individual and subjective, not directly observable by others, and grounded in each person's own understanding of the different aspects of his or her life. Quality of life encompasses various dimensions, including physical, mental, and social functioning; each person's perception of their wellbeing and health; disability; and life span. Researchers believe that investigating quality of life, and working to improve it, has a major effect on the health of patients' personal and social lives. Additionally, quality of life has consistently been considered alongside the findings of clinical studies, treatment, and health care. Typically, hospitalized patients, whatever their diagnosis, receive routine instructions during hospitalization and at discharge, as is common in healthcare centers, and nurses are the main group providing this education. In most cases this training is delivered orally and face to face, which makes instruction problematic: it is tedious, energy-intensive, and time-consuming, and it requires many tactful, experienced instructors. A study conducted by Deccache et al. in 2012 indicated that only 20% of hospitalized patients were satisfied with the information provided on their illness and its treatment, while 60% expected more and better information and 20% were completely unsatisfied with the training program. These problems may increase disease complications, readmissions, and care expenses. Conversely, patients can take an active role in managing their illness by gaining self-care skills through proper training in their safety and functional ability. A variety of educational methods exist, and it is important that trainers choose the most suitable one. One such resource is video-based instruction, whose practical impact as a way to generate interest and increase learning has been demonstrated. The benefits of video training include the capacity to store a large amount of information, continuity of content, the absence of anxiety during training, and the ability to add new information to existing material. A further benefit is the use of color, motion, and varied scenes, which, combined with the audio component, provide an inclusive form of education. Furthermore, this method is inexpensive.
Previous studies have shown that audio and video training has a positive effect on quality of life. For example, a study by Baraz-Pardenjani showed that instructional videos improved the quality of life of hemodialysis patients. The results of Sheikh et al. likewise revealed that audio and video training was effective in improving the quality of life of patients with type II diabetes. The study of Vocht et al. indicated that video-assisted training improved the quality of life of elderly patients suffering from psychological problems. Because tracheostomy patients require long-term care at home, self-care training may help reduce the incidence of tracheostomy complications and thus increase quality of life. Therefore, the present study was conducted to investigate the effect of an educational videotape on the quality of life of patients with a tracheostomy.
Inclusion criteria were: age over 20 years, capacity for self-care, ability to cooperate, no history of mental illness, full orientation to time, place, and person at the time of training, completion of the questionnaires, access to facilities for watching the training videos, no previous experience of or training in tracheostomy care at health centers, and the ability to communicate in the Persian language.
The instrument used to gather and record information in this study was a three-part survey comprising an explanation of the study and intervention, demographic information, and the SF-36 quality of life questionnaire. The content validity of the demographic form was established through a literature review; it was then reviewed by ten professors, whose feedback was incorporated. The SF-36 questionnaire consists of 36 questions covering eight domains of life: role limitations due to physical health problems, role limitations due to emotional problems, vitality (energy/fatigue), general mental health, social functioning, bodily pain, general health perception, and physical functioning. Scores on each scale range from 0 to 100, with 0 indicating the worst and 100 the best condition. The SF-36 was translated into Persian, and its reliability and validity evaluated, by Montazeri and colleagues in 2005.
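As background, SF-36 scale scores are conventionally placed on this 0–100 range by a simple linear transformation of the raw scale score; the R sketch below illustrates the idea with a hypothetical raw range and is not the authors' scoring code:

```r
# Rescale a raw SF-36 scale score so 0 is the worst and 100 the best state
to_percent <- function(raw, min_raw, max_raw) {
  100 * (raw - min_raw) / (max_raw - min_raw)
}
to_percent(raw = 14, min_raw = 10, max_raw = 30)  # -> 20 on the 0-100 scale
```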
After approval by the ethics committee of Tehran University of Medical Sciences, subjects were selected through convenience sampling and, after providing written informed consent to take part in the study, were allocated to the two groups by a random block method with blocks of five. Assuming P = 0.5 and a 95% confidence level, the required total sample size was 80; allowing for attrition, 45 people were calculated for each group. In the intervention group, one patient was excluded because of unwillingness to continue participating and four because of incomplete questionnaires. In the control group, five patients declined to complete the survey after two months and were therefore excluded. Thus, 40 participants in each group completed the questionnaire at the end of the study. At discharge, after the routine training given by the medical staff, both groups completed the questionnaires described above. In the intervention group, in addition to the routine training received from the medical staff, patients were given an educational CD with audio and video content, created by the researcher, for use at home. The CD covered an introduction to tracheostomy care and demonstrated the daily care tracheostomy patients need, including bathing, shaving, suctioning, replacing the dressing around the tracheostomy, cleaning the tracheostomy tube, recognizing symptoms of tracheostomy site infection, communicating with others, and presenting themselves in public. The scientific content of the film was approved by ten professors from Tehran University of Medical Sciences. Patients in the intervention group could watch the film at home with no restriction on the number of viewings; two reminder telephone calls were made to ensure that they watched it, and all patients reported having done so. There was also no restriction on other family members watching the film. After two months, a follow-up meeting was arranged with patients in both groups to complete the questionnaire again.
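The reported total of 80 is consistent with the standard proportion-based formula n = z^2 p(1 - p) / d^2. The margin of error d is not stated in the paper; the R sketch below simply shows that an assumed d of about 0.11 reproduces that total:

```r
p <- 0.5            # assumed proportion (as stated in the paper)
z <- qnorm(0.975)   # 95% confidence level
d <- 0.11           # margin of error: an assumption, not reported in the paper
n <- ceiling(z^2 * p * (1 - p) / d^2)
n                   # 80 in total; inflated to 45 per group to allow for attrition
```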
The data were entered into SPSS version 20. The Kolmogorov-Smirnov test indicated a normal distribution, so an independent t-test was used to compare the two groups, and a paired t-test was used to compare group means before and after the intervention. The significance level was set at 0.05.
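A minimal sketch of this pipeline in R (the study itself used SPSS; the data below are simulated placeholders, not the study's):

```r
set.seed(2)
control_post <- rnorm(40, mean = 41.9, sd = 9)   # simulated overall QOL scores
interv_pre   <- rnorm(40, mean = 40.3, sd = 7)
interv_post  <- rnorm(40, mean = 47.1, sd = 9)
ks.test(as.numeric(scale(interv_post)), "pnorm") # Kolmogorov-Smirnov normality check
t.test(interv_post, control_post)                # independent t-test between groups
t.test(interv_post, interv_pre, paired = TRUE)   # paired t-test within a group
```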
Most participants were in the age group of 60 years and older. Men made up 62.5% of the control group and 52.5% of the intervention group. Most participants lived with their spouse and children (52.5% in the control group and 62.5% in the intervention group), while 15% of the control group and 7.5% of the intervention group lived with others.
The most common occupation in both groups was self-employment (37.5% in the control group and 40% in the intervention group); the least common was unemployment (5% in both groups). The most common education level was below diploma in the control group (30%) and elementary school and diploma in the intervention group (30% each), whereas the least common was primary education in the control group (20%) and academic education in the intervention group (15%).
Income was reported as sufficient by 47.5% of the control group and 60% of the intervention group. The most common cause of tracheostomy was laryngeal tumor (40% in the control group and 35% in the intervention group); the least common was tracheal tumor (5% in the control group and 7.5% in the intervention group). Chi-square tests indicated no significant differences in demographic characteristics between the two groups, which were therefore homogeneous (Table 1).
Before the intervention, the mean ± standard deviation of overall quality of life was 40.46 ± 2.58 in the control group and 40.30 ± 7.25 in the intervention group. An independent t-test showed that the two groups were homogeneous before the intervention (P = 0.125), and the groups did not differ significantly in any of the eight domains of quality of life (p > 0.05) (Table 2).
After the intervention, the mean ± standard deviation was 41.93 ± 9.28 in the control group and 47.12 ± 9.28 in the intervention group; the t-test indicated that the two groups differed significantly (P = 0.03), with a higher mean overall quality of life in the intervention group. The two groups also differed significantly in the dimensions of quality of life: role limitations due to emotional problems (p = 0.01), vitality (energy/fatigue) (p = 0.03), general mental health (p = 0.005), social functioning (p = 0.006), bodily pain (p = 0.001), general health perception (p = 0.02), and physical functioning (p = 0.01), with higher scores in the intervention group than in the control group (Table 3).
In the control group, comparison of the mean ± standard deviation of overall quality of life at discharge and two months later showed that the change was not statistically significant (p = 0.09). Comparison of the eight domains of quality of life showed statistically significant decreases in role limitations due to physical health problems (p = 0.03), role limitations due to emotional problems (p = 0.001), general mental health (p = 0.03), social functioning (p = 0.04), and physical functioning (p = 0.02). No significant difference was observed in vitality (energy/fatigue) (p = 0.92) or bodily pain (p = 0.16), whereas a significant increase was observed in general health perception (p = 0.03) (Table 4).
In the intervention group, comparison of the mean ± standard deviation of overall quality of life before and after the education showed an increase from 40.30 ± 7.25 to 47.12 ± 9.28, and a paired t-test revealed that this change was statistically significant (p = 0.001). Comparison of the eight domains of quality of life showed significant increases in all of them: role limitations due to emotional problems (p = 0.005), role limitations due to physical health problems (p = 0.02), general mental health (p = 0.01), physical functioning (p = 0.04), social functioning (p = 0.02), bodily pain (p = 0.02), general health perception (p = 0.005), and vitality (energy/fatigue) (p = 0.002) (Table 5).
In today's rapidly developing society, electronic health care systems are currently the most feasible approach to achieving improved service efficiency and quality. This study hypothesized that tracheostomy patients who received the video-based home education program would have better quality of life scores than tracheostomy patients receiving routine training. The results showed notable improvements in the quality of life of patients in the intervention group, supporting the study hypothesis. The two groups of patients were similar in all socio-demographic characteristics, which was necessary to ensure that any differences observed after the intervention could not be attributed to differences in these characteristics.
Two months after hospital discharge, comparison of the two groups showed statistically significant differences in the mean scores of overall quality of life and its domains, which were higher in the intervention group. A study conducted in Iran indicated that, after self-care training delivered by video, the quality of life of patients with a permanent pacemaker increased significantly in the intervention group compared with the control group, and quality of life in the emotional, physical, and social domains was also clearly greater in the intervention group. A study by Headley et al. in the USA showed that video education significantly increased quality of life and its domains in breast cancer patients in the intervention group compared with the control group. A study by Baraz-Pardenjani in Iran indicated that video education can influence the quality of life and its domains in hemodialysis patients. Mahmoud and Valley reported that health literacy had a significant effect on the quality of life of married women. Also, a study by Salameh et al. in Palestine indicated that an electronic education program had a more positive effect on the quality of life of patients with coronary heart disease than the routine training given to the control group. The findings of the present study are consistent with the above-mentioned studies, confirming the significant effect of video-based teaching on quality of life.
In the control group, comparison of quality of life and its domains at discharge and two months later indicated that overall quality of life increased by 1.47 points on a 100-point scale, but this increase was not statistically significant. However, most of the domains, including role limitations due to physical health problems, role limitations due to emotional problems, general mental health, social functioning, physical functioning, and general health perception, decreased significantly, while no significant difference was observed in the vitality (energy/fatigue) and bodily pain domains. A study by Atlee et al. showed that the quality of life of patients with permanent pacemakers decreased in the mental and social domains one month after pacemaker implantation, with no significant difference in the physical aspects. Furthermore, a study by Hashmi et al. indicated that the quality of life of patients with a tracheostomy decreased after its placement in the absence of proper education, consistent with the present findings. The results of this study and those cited above therefore indicate that chronic patients, such as tracheostomy patients who receive only routine self-care training in health centers, experience a decrease in quality of life; hence, more attention needs to be paid to the training of these patients.
In the intervention group, comparison of quality of life and its domains before and after the intervention showed that overall quality of life increased by 6.82 points on a 100-point scale, a statistically significant change, and all domains of quality of life also increased significantly (Table 5). The study by Baraz-Pardenjani et al. in Iran showed that video education in hemodialysis patients increased overall quality of life as well as physical functioning, physical role functioning, emotional role functioning, social functioning, and general health perceptions. A study by Stalker in the United Kingdom indicated that video education in hemophilia patients of different educational backgrounds increased overall quality of life and its mental and physical domains. The findings of this study are in accordance with those mentioned above. We therefore suggest the use of video education in addition to the routine training provided by clinical staff; by providing a proper source on correct self-care, it can increase the quality of life of tracheostomy patients after hospital discharge.
Limitations of the study included individual differences among the subjects and the differing underlying causes of their tracheostomies, which could affect how they learned self-care. In addition, the possibility that participants used mass media, including radio and television, or other educational resources was beyond the researcher's control, although this could occur in both groups. It is suggested that future studies assess and compare the efficacy of other educational methods on the quality of life of tracheostomy patients, the effect of audio-video materials on the incidence of complications and readmissions in this group, and the use of educational videos on the quality of life of other patients.
The results of this study indicated that the quality of life of tracheostomy patients in the control group decreased after discharge. Given the increase in the quality of life of patients in the intervention group following the use of educational videos, the healthcare team, and nurses in particular, can use this training method in addition to routine care, as a home-based educational program, to improve the quality of life of patients with a tracheostomy.
This paper is the result of an MSc thesis in Critical Care Nursing at Tehran University of Medical Sciences. I would like to thank the research deputy of the University for financial support, the research associate of the School of Nursing and Midwifery, the patients and nurses of Amir Alam Hospital and the Imam Khomeini Cancer Institution, and the research assistant of Kurdistan University of Medical Sciences.
Hepatocellular carcinoma (HCC) is the fifth leading cause of cancer death worldwide, and about 500,000 people die of it each year. More than 90% of HCC cases develop as a consequence of underlying liver disease, with hepatic cirrhosis present in 80% of cases [2–4]. More than 60% of patients are diagnosed with late-stage disease after metastasis has occurred, resulting in an overall 5-year survival rate of <16%. If appropriate treatment is given at an early stage, the 5-year survival rate of HCC patients exceeds 75%. Thus, detection of HCC at an early stage significantly improves patient outcomes. The American Association for the Study of Liver Diseases (AASLD) once recommended alpha-fetoprotein (AFP) and ultrasound examination for HCC surveillance in the hepatic cirrhosis population, but analysis of recent studies shows that AFP determination lacks adequate sensitivity and specificity for effective surveillance. Novel biomarkers are urgently needed for HCC screening to reduce its high mortality; many studies have reported that Lens culinaris agglutinin-reactive AFP (AFP-L3) and Golgi protein 73 (GP73) are effective for early HCC diagnosis [9–11], but clinical follow-up from the hepatic cirrhosis stage to HCC has been lacking. The goal of the present study is to estimate the risk prediction value of several serum markers during the progression from hepatic cirrhosis to HCC.
All study subjects were enrolled at the 302 Military Hospital, Beijing, China, and were followed up during the study period of 36 months, until HCC diagnosis was confirmed or the study ended (December 31, 2016). The study population included hepatic cirrhosis patients over 30 years old who had been identified as HBV- or HCV-infected for at least 5 years. Patients with any of the following conditions were excluded: a diagnosis of HCC at the starting point of the study; other systemic diseases such as diabetes and hypertension; prior surgery, interventional therapy, radiotherapy, chemotherapy, or other invasive treatment; or severe complications such as upper gastrointestinal bleeding and hepatic encephalopathy. The final diagnosis was made by liver histopathology or MRI based on guidelines from the Ministry of Health of the People's Republic of China and from the Chinese Society of Hepatology and the Chinese Society of Infectious Diseases [13, 14]. The study procedures were approved by the ethics committee of the 302 Military Hospital of China, and written informed consent was obtained from each subject.
A total of ten routine laboratory tests were chosen for analysis: albumin (ALB), total bilirubin (TBil), alanine transaminase (ALT), platelet count (PLT), prothrombin time (Pt(s)), prothrombin time activity (Pt(a)), AFP, GP73, AFP-L3, and the AFP-L3/AFP ratio (L3/AFP). Clinical chemistry tests were performed on an automatic biochemical analyzer (AU5400, Olympus, Japan). PLT was measured using a hematology analyzer (XE-1800, SYSMEX, Japan). Pt(s) and Pt(a) were measured on an automated coagulation instrument (CA-7000, SYSMEX, Japan). AFP and AFP-L3 were measured on an automated immunoassay analyzer (COBAS6000, ROCHE, Switzerland). Kits for the enzyme-linked immunosorbent assay for GP73 were obtained from Hotgen Biotech (Beijing, China).
The incidence of HCC during the study period was determined by examination of medical records. For each of the ten markers, values at the starting point of the study were compared between patients with abnormal serum levels (the positive groups) and those with normal levels (the negative groups). The cut-off criteria were as follows: ALB < 35 g/L, TBil > 19 μmol/L, ALT > 40 U/L, PLT < 100 × 10⁹/L, AFP > 10 ng/mL, Pt(s) > 13 s, Pt(a) < 75%, AFP-L3 > 1.0 ng/mL, GP73 > 150 ng/mL, and AFP-L3/AFP > 0.05. After 3 years of follow-up, the cumulative incidence (CI) and relative risk (RR) were calculated for each group to identify potential risk factors for HCC, and a chi-square test was performed to compare incidence rates between the positive and negative groups. All markers at the starting point were also compared between patients who developed HCC within 3 years and those who did not, to explore the early predictive value of the serum markers. To investigate the dynamic change of the serum markers during the progression to HCC, we compared all markers in HCC patients at two time points: the starting point and the time of HCC diagnosis. Normally distributed data were analyzed with Student's t-tests; other data were tested by the Wilcoxon method. To assess the role of the markers as diagnostic predictors of HCC, receiver operating characteristic (ROC) curves were plotted and the area under the curve (AUC) was calculated. All statistical analysis was performed using SPSS 14.0 software (SPSS, Inc., Chicago, IL).
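To illustrate the cumulative incidence, relative risk, and chi-square steps, here is a minimal R sketch on a hypothetical 2 × 2 follow-up table for one marker; the counts are invented, chosen only to match the cohort's overall size:

```r
hcc    <- c(pos = 20, neg = 14)   # developed HCC within 3 years (marker +/-)
no_hcc <- c(pos = 15, neg = 60)   # did not develop HCC
ci_pos <- hcc["pos"] / (hcc["pos"] + no_hcc["pos"])  # cumulative incidence, positive group
ci_neg <- hcc["neg"] / (hcc["neg"] + no_hcc["neg"])
ci_pos / ci_neg                   # relative risk (about 3 with these counts)
chisq.test(rbind(hcc, no_hcc))    # compare incidence between the two groups
```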
A total of 161 cases were diagnosed as hepatic cirrhosis during the study period. Fifty-two cases were excluded (35 due to a history of other systemic diseases, 3 due to excessive missing data, and 14 due to confirmed severe complications). Therefore, a total of 109 patients met the inclusion criteria and were analyzed. Participants had a mean age of 53.9 years (SD = 9.7); 60.6% were male, and 94.5% had a history of HBV infection (see Table 1). During the 36 months of follow-up, 34 of the 109 cirrhotic patients (31.2%) were eventually confirmed to have HCC.
We compared serum marker levels at the starting point between patients who developed HCC and those who did not; four markers (AFP, AFP-L3, ALT, and the AFP-L3/AFP ratio) differed significantly between the groups (p < 0.05; see Table 2 and Figure 1). Increases in serum AFP, AFP-L3, and ALT levels and in the AFP-L3/AFP ratio are thus potential precursors of HCC, and regular monitoring of these markers in hepatic cirrhosis patients appears warranted.
The risk factor analysis showed that the incidence rate of HCC in patients with high AFP, AFP-L3, ALT, and AFP-L3/AFP levels was significantly higher than in those with normal levels (RR = 2.99, p < 0.001; RR = 2.92, p < 0.001; RR = 2.72, p = 0.001; and RR = 2.34, p = 0.003, respectively). These results indicate that cirrhotic patients with higher levels of AFP, AFP-L3, the AFP-L3/AFP ratio, or ALT had a higher risk of developing HCC, and that these four markers are risk factors for HCC. In contrast to previous studies, we found that a high GP73 level appeared to be a protective factor for HCC, as elevated GP73 levels were associated with a lower risk of incident HCC (see Table 3).
ROC analysis was used to determine whether the serum markers could predict HCC in the cirrhotic population. AFP, AFP-L3, and ALT had relatively good predictive power for progression to HCC, with AUCs of 0.736, 0.744, and 0.693, respectively (see Table 4, Figure 2). Multiple regression analysis suggested that combining three markers could not significantly improve predictive efficacy; the best combination was ALT and AFP, which yielded an AUC of 0.780 (see Table 4, Figure 3).
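A sketch of this ROC and marker-combination analysis using the pROC package on simulated data (the paper's computations were done in SPSS, and the variable names below are placeholders):

```r
library(pROC)
set.seed(3)
hcc <- rbinom(109, 1, 0.31)                      # 1 = developed HCC (simulated)
afp <- rnorm(109, mean = 10 + 15 * hcc, sd = 8)  # simulated marker values
alt <- rnorm(109, mean = 40 + 20 * hcc, sd = 15)
auc(roc(hcc, afp))                               # single-marker AUC
comb <- glm(hcc ~ afp + alt, family = binomial)  # combine markers by logistic regression
auc(roc(hcc, fitted(comb)))                      # AUC of the combined predictor
```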
Among the 34 HCC cases, 17 were excluded due to incomplete or unavailable data. We analyzed the dynamic change of the ten markers in the remaining 17 cases during the progression to HCC and found that the serum GP73 level decreased significantly (p = 0.041) by the time patients were diagnosed with HCC. The concentration of GP73 was 194.6 (66.12–350) ng/mL in hepatic cirrhosis and 154.2 (13.14–275.4) ng/mL in HCC (see Table 5 and Figure 4). Serum GP73 thus showed a gradually decreasing tendency as hepatic cirrhosis progressed to HCC.
In the past few decades, many promising candidate biomarkers for HCC have been found, but most have not been applied to clinical diagnosis because of limited practicability and high cost [15–19]. Nevertheless, these new markers have the potential for clinical application given their higher sensitivity and specificity. So far, AFP and imaging technology (e.g., ultrasound or computed tomography) are the two primary methods of diagnosing HCC in hospitals. AFP has been used as a serum marker for HCC for many years, but its sensitivity is only about 39%–65%. AFP-L3, the main glycoform of AFP in the serum of HCC patients, has proven to be an excellent biomarker, with a sensitivity of 75%–96.9%. A high percentage of AFP-L3 has been shown to be associated with poor differentiation and biologically malignant characteristics, worse liver function, and larger tumor mass, and some experts consider the AFP-L3/AFP ratio more helpful in the diagnosis and prognosis of HCC [21, 22]. However, Miura and coworkers showed that AFP-L3 does not provide an entirely satisfactory solution for detecting HCC at an early stage. Our study showed that patients with higher levels of AFP, AFP-L3, the AFP-L3/AFP ratio, and ALT have a higher risk of developing HCC than those with normal levels, suggesting that these four markers are potential precursors of HCC in hepatic cirrhosis patients and may be useful indicators of the development of HCC.
GP73 is a resident Golgi-specific membrane protein expressed by biliary epithelial cells in normal liver. A meta-analysis reported that GP73 is a valuable serum marker that appears superior to AFP and can be useful in the diagnosis and screening of HCC. However, Tian et al. indicated that GP73 is elevated not only in HCC but also in other chronic liver diseases such as hepatic cirrhosis and hepatitis; moreover, the concentration of GP73 in HCC (median = 107.3 μg/L) was significantly lower than in hepatic cirrhosis patients (median = 141.2 μg/L), although their conclusions may suffer from sample selection biases. In our study, follow-up analyses were conducted to assess the dynamic change of the serum markers during the progression to HCC. Our findings further confirm that the serum GP73 level decreased significantly during the progression to HCC. Some studies have found that GP73 protein and mRNA expression increase gradually in chronic liver diseases, not only in hepatocytes but also in activated stellate cells, which are the most important cell type in hepatic cirrhosis [25–27]. Therefore, maximal GP73 concentrations were observed in hepatic cirrhosis rather than in HCC.
Although this study is limited by its small sample size and short duration, our data suggest that higher serum levels of AFP, AFP-L3, the AFP-L3/AFP ratio, and ALT are risk factors associated with the development of HCC, and that measurement of GP73 has guiding significance in predicting the risk of HCC in hepatic cirrhosis patients; regular monitoring of these serum markers in such patients is therefore necessary.
The goal of maternal immunization is to boost maternal levels of specific antibodies so as to provide the newborn and young infant with sufficient IgG antibody concentrations at birth to protect them against infections occurring during a period of increased vulnerability, until they are able to respond adequately to their own active immunizations or to infectious challenges. Newborns and young infants are at the greatest risk of morbidity and mortality from infectious diseases, and they depend on maternal antibodies to resist these infections in early life. Maternal antibody levels can be optimized during pregnancy, given that pregnant women have intact humoral immune responses to vaccines and adequately produce antibodies, which are efficiently transferred to the fetus through an active receptor-mediated transport system in the placenta. Higher antibody concentrations at birth result in protection from infection and disease, or in delayed onset and decreased severity of various infectious diseases in the newborn. Examples of this concept include passive maternal antibody protection against tetanus, pertussis, respiratory syncytial virus (RSV), influenza virus, and group B streptococcus (GBS) infections, among others.
Research on maternal immunization is not new; as vaccines were developed, their administration to pregnant mothers to protect them and/or their infants was considered and evaluated, including protection against smallpox with vaccinia vaccine in the late 1800s, whole-cell pertussis vaccine (DTP) in the 1940s, influenza vaccine after the 1950s pandemics, and tetanus toxoid vaccine to prevent maternal and neonatal tetanus worldwide since the 1960s. Despite the success of the Maternal–Neonatal Tetanus Elimination program of the World Health Organization (WHO) (http://www.who.int/immunization/diseases/MNTE_initiative/en/), there was a paucity of active research on maternal immunization for much of the twentieth century, in part due to concerns about the safety of administering any drug or biologic to women during pregnancy, particularly after the experience with thalidomide in the 1960s, which was associated with severe limb and other deformities in infants born to women who took this medication (unlicensed in the US) to treat hyperemesis gravidarum.
The potential impact of maternal immunization as a public health strategy to prevent disease in mothers and infants is well recognized. Yet, there are no vaccines currently approved or licensed specifically for use in pregnant women. Licensed vaccines that are recommended for non-pregnant adults may be administered to pregnant women based on need and a risk:benefit assessment: when the risk of exposure and disease from a vaccine-preventable infection is high for a mother and/or her fetus and an effective vaccine is available, the benefit of vaccine protection outweighs any potential theoretical risk from the vaccine, which is in turn considered lower than the risk of acquiring the infection and disease the vaccine can prevent. Licensed vaccines that have not been formally evaluated in or approved for pregnant women are therefore recommended for administration during pregnancy by the WHO and the US Centers for Disease Control and Prevention (CDC), as well as by local organizations in many countries (1, 2) (Table 1). These recommendations have evolved over time, and they differ: the current WHO recommendations do not specifically recommend pertussis vaccination during pregnancy except where there is a known high burden of disease, as implemented in several countries such as Canada and Australia, while the CDC and other public health programs, such as in the UK, recommend routine vaccination of all pregnant women with the tetanus, diphtheria, and reduced acellular pertussis antigen content (Tdap) vaccine at every pregnancy. The specific timing of administration of this vaccine also varies between countries. Similarly, while tetanus vaccination is recommended for all pregnancies by the WHO, most industrialized countries in Europe and North America, where pediatric vaccination coverage is high and the risk of tetanus infection at birth is negligible, do not routinely recommend tetanus vaccine administration during pregnancy; it is now given only as part of Tdap. Finally, influenza vaccination during pregnancy is considered an essential element of prenatal care in the US, where pregnant women have among the highest influenza vaccination coverage rates; however, while pregnant women are not excluded from influenza vaccination elsewhere, routine administration is not the standard in most countries.
(a) Influenza vaccine is recommended by WHO for administration in pregnant women in regions where influenza vaccine programs are already in place. Influenza vaccination is recommended as part of routine antenatal care in the US and several countries in Latin America.
Given that currently licensed vaccines are not specifically indicated for pregnant women, some providers and government agencies worldwide might be reluctant to recommend routine vaccination in this population. However, the US Food and Drug Administration (FDA) addresses this concern by approving labeling clearly stating that licensed vaccines recommended for pregnant women (such as influenza and Tdap) are NOT contraindicated for use in pregnant women, with specific considerations regarding safety of use during pregnancy addressed in the pregnancy subsection of the FDA-approved labeling (3). Furthermore, the safety of these vaccines continues to be monitored through post-licensure surveillance mechanisms, such as pregnancy registries and large passive and active adverse event reporting and surveillance systems (4).
Ensuring and evaluating the safety of vaccines administered to pregnant women is a key component of any maternal immunization program or recommendation. This is particularly true now that new vaccines that can benefit pregnant women and their infants, such as vaccines against GBS and RSV, are being developed. An important issue is the need for harmonized standard definitions of key safety outcomes after maternal vaccination and for a systematic approach to the assessment of safety throughout the life cycle of a vaccine, particularly after implementation, when large numbers of pregnant women are vaccinated. It is critical to consider the inherent risks associated with pregnancy itself and to clearly understand the background rates of these risks in specific populations. Furthermore, to evaluate the impact of maternal immunization as a public health strategy against the burden of morbidity and mortality associated with the infection it prevents, it is necessary to establish baseline rates of these outcomes in order to demonstrate the efficacy and benefit of the vaccines in both mothers and infants. Finally, the ethical and regulatory aspects surrounding the inclusion of pregnant women as research subjects also influence progress in the development of vaccines for maternal immunization.
Substantial progress has occurred in maternal immunization research (Table 2). Maternal immunization research has been supported by the National Institutes of Health in the US for decades, spanning basic science, clinical, epidemiological, and translational research (5). Studies of relevant pathogens, including GBS, Haemophilus influenzae type b, Streptococcus pneumoniae, and tetanus, were conducted during the 1980s and 1990s; studies of pertussis and RSV were prioritized from the 1990s to the first decade of the twenty-first century, while seasonal and pandemic influenza vaccine studies have been conducted continuously for 40 years. Experimental and licensed vaccines for these pathogens were evaluated in phase I/II clinical trials in pregnant women under contract with various public and academic institutions in the US. Furthermore, these programs promoted research related to maternal immunization, from vaccine antigen identification to the development of pertinent laboratory assays and reference materials, animal models and developmental toxicity studies, and epidemiology and safety studies. In 2013, guidance documents on research, protocol design, and assessment of the safety of vaccines during pregnancy were developed (6, 7). Other guidance documents have since been published, providing a framework for the study of vaccines and other biologics in pregnant women.
In 2008, a pivotal study conducted in Bangladesh was published (14). It demonstrated for the first time that maternal influenza vaccination can protect mothers and their infants from laboratory-confirmed influenza illness, with an efficacy in preventing infant influenza of 63%, similar to that achieved with active immunization. This study led to the support of three large studies of influenza vaccination of pregnant women by the Bill and Melinda Gates Foundation, conducted in Nepal, Mali, and South Africa. These seminal studies have now been completed, contributing significantly to knowledge of the benefits and safety of influenza vaccination for mothers and infants, including HIV-infected women, and providing critical information to guide decisions and policies surrounding maternal immunization (15–17). One important contribution of these trials was the determination of the relatively limited duration of protection of infants provided by maternally derived antibody, which decreased substantially after the second month of life (39). The 2009–2010 influenza pandemic was another critical event that led to the subsequent prioritization of maternal immunization research in the US and worldwide; the number of clinical trials and publications on maternal immunization has increased substantially since the pandemic. Importantly, the knowledge gained regarding the safety, immunogenicity, and implementation of influenza vaccines for pregnant women has advanced this field more than ever, one example being the acquisition of data on the safety and effectiveness of adjuvanted influenza vaccines in pregnant women (40). In general, more immunogenic vaccines are needed for all populations, including pregnant women, to improve effectiveness and further reduce the impact of influenza.
review
99.9
In 2012, prompted by evidence of the reemergence of pertussis disease and associated infant mortality, maternal immunization with Tdap was recommended in the US and the UK as the most immediate and direct intervention to decrease pertussis in the first few months of life (22). Several other countries with a high burden of pertussis disease in the Americas, Europe, and Australia also adopted this recommendation. Importantly, research on maternal immunization with Tdap flourished, filling critical information gaps, such as understanding the optimal timing of maternal vaccination in the second trimester of gestation to achieve higher antibody concentrations in infants at birth and better, longer-lasting protection in the first few months of life until active immunization with pertussis-containing vaccines is achieved (41). Another relevant concept associated with the use of Tdap vaccine in pregnancy is the potential blunting of infant immune responses to active immunization when high concentrations of maternal antibodies are present. This has been observed and documented for various antigens in the pertussis vaccines, including pertussis toxin, filamentous hemagglutinin, and pertactin. However, the relatively lower concentrations of vaccine-specific antibodies in infants after primary vaccination have not been associated with increased incidence or severity of pertussis disease in infants of vaccinated mothers, and preservation of priming and memory immune responses has been documented (42–45). Furthermore, the safety and effectiveness of the Tdap maternal immunization program have been demonstrated in the US and the UK, supporting continuation of this intervention in these countries (23–26). Similar programs are now in place in Latin America and other countries and regions with a high burden of pertussis disease.
review
99.9
Currently, several ongoing studies are assessing various aspects of the use of licensed vaccines, such as influenza and pertussis vaccines, in pregnant women, as well as the development of new vaccines specifically designed for administration during pregnancy to protect infants against RSV and GBS in early life. Numerous RSV and GBS vaccines are in various phases of development, from preclinical studies to clinical trials, supported by multiple stakeholders from industry to private and public organizations (34–36). One RSV vaccine is currently in phase III of clinical development and, if successful, promises to be the first vaccine developed and licensed specifically for use in pregnancy. Achieving this milestone has the potential to change the landscape and practical applicability of infant disease prevention through maternal immunization. In addition to research focused on basic placental biology and immunology, on understanding the role of passive and breast milk antibodies in infant protection and in responses to natural infection and active immunization, and on determining how to optimize the maternal intervention to improve its safety and efficacy, other aspects that require further study include the acceptance, feasibility, and logistics of implementing maternal immunization in different settings and populations. Furthermore, aspects related to the education of mothers and providers, utilization, communications, and long-term surveillance and assessment of vaccine safety are paramount for the success of maternal immunization as a public health strategy to improve maternal and child health globally. The field of maternal immunization research is therefore open, active, and rich.
review
99.9
The perception of risk of any intervention during pregnancy has evolved over time. Before the demonstration that the use of thalidomide during pregnancy was associated with birth defects, there were relatively few restrictions on what pregnant women could be exposed to (46). This tragic association resulted in a shift toward strict restrictions on what pregnant women could be exposed to, including medications and vaccines, and the exclusion of pregnant women from research. However, there has been a culture change in recent years, driven by the need to develop effective immunization strategies and the understanding that pregnant women and their infants can actually benefit from participating in clinical research. Their participation in clinical trials of vaccines and therapeutics ultimately reduces any potential harm of these products by generating information that is specifically relevant to pregnancy and by avoiding the exclusion of women from receipt of potentially beneficial interventions available to the rest of the population. Having access to the benefits of participating in research, and to the results of that research, will promote and improve maternal, fetal, and infant health. Clinical studies in pregnant women are carefully designed to minimize the risks of the intervention, particularly the risk to the fetus, and to balance the risk of participating in research against the risk of not having a potentially beneficial intervention available for mothers and infants.
review
99.9
Several recent milestones have been reached in the regulatory aspects of the assessment of vaccines for use in pregnancy (27). It is clear that, for both novel vaccines and currently licensed vaccines not previously evaluated in pregnant women, regulatory approval for use during pregnancy would result in the inclusion of specific information in the product label, which would facilitate the acceptance and use of the vaccine by health-care providers and the public in general. One important step toward facilitating the use of vaccines in pregnancy is the recent update to the US FDA pregnancy and lactation labeling rule, whereby the product label pregnancy risk categories designated with the letters A, B, C, D, and X, which were difficult to put into practice, have been replaced with narrative descriptions of the risks of using the vaccine during pregnancy, as informed by any source of information, including both observational and prospective studies (28). In 2015, vaccine manufacturers sought guidance from the Vaccines and Related Biological Products Advisory Committee of the FDA to work toward the development of vaccines for maternal immunization. At its fall meeting, the committee determined that the regulatory approval process for vaccines indicated for maternal immunization to prevent infant disease would be guided by the regulations outlined in Title 21 of the Code of Federal Regulations and the standards set forth in applicable documents such as the ICH guidelines and FDA guidance documents (29). The groups agreed that the path to development and licensure of a vaccine for pregnant women would be product specific and designed to support the indication being sought. Key aspects to consider would include the use of serologic endpoints as markers of passive protection in the infants, the evaluation of the duration of immunity and of immune interference with childhood vaccines, and the duration and type of safety follow-up. Importantly, the committee considered that observational studies could be used as an approach to confirm the effectiveness of already licensed vaccines that are recommended for use in pregnancy in the US.
review
99.9
Progress has also been made in regulations that further expand the options for pregnant women to be included in research. The updated "Common Rule," the set of federal regulations for the ethical conduct of human subject research in the US, clearly delineates that pregnant women or fetuses may be involved in research if several conditions are met, including the prior conduct, when scientifically appropriate, of preclinical studies (including studies on pregnant animals, such as reproductive toxicology studies) and clinical studies (including studies on non-pregnant women) (30). The document also delineates risk categories for research based on the prospect of benefit for the woman or the fetus: the risk of the research must be balanced against the prospect of benefit for the woman or the fetus, and if there is no such prospect of benefit, the risk to the fetus must be no greater than minimal and the purpose of the research must be the development of important biomedical knowledge that cannot be obtained by any other means. The pregnant mother has the right to provide consent for herself and for her baby, unless the prospect of direct benefit is solely to the fetus, in which case the consent of both the pregnant mother and the father should be obtained, with exceptions allowed in specific situations that would prevent the father from signing. These provisions help guide Institutional Review Boards in their decision making regarding the participation of pregnant women in research.
review
71.56
Other advances relate to the change in classification of pregnant women from being considered a "vulnerable" population for research to no longer being considered "vulnerable." This challenge for maternal immunization was addressed by the National Vaccine Advisory Committee to the Department of Health and Human Services, which also recommended the prioritization of maternal immunization as a public health strategy and investment in the development of vaccines for pregnant women (31). Globally, the 2017 updated International Guidelines for Health-Related Research Involving Humans of the Council for International Organizations of Medical Sciences, developed in collaboration with the WHO, also conclude that women must be included in health-related research unless a good scientific reason justifies their exclusion, and that women should provide informed consent for themselves (32). Finally, the 21st Century Cures Act, a law enacted by the US Congress in December 2016 and designed to accelerate medical product development and give patients faster access to innovations, established a task force on research specific to pregnant women and lactating women to provide advice and guidance to the Secretary of HHS, to address gaps in knowledge and research regarding safe and effective therapies for pregnant and lactating women, and authorized substantial funds for this task (33). A key provision of this law was the inclusion of vaccines administered during pregnancy in the Vaccine Injury Compensation Program, thereby providing coverage for claims of potential adverse effects of vaccination on the fetus and the mother for providers who administer vaccines to pregnant women. Specifically, the law states that "…both a woman who received a covered vaccine while pregnant and any child who was in utero at the time such woman received the vaccine shall be considered persons to whom the covered vaccine was administered and persons who received the covered vaccine." This provision is a tremendous step toward improving the acceptance, confidence, and coverage of maternal immunization in the US.
review
99.25
In addition to the work of the NIH and investigators involved in maternal immunization research, one of the organizations that provided early contributions toward the goal of developing a consensus, harmonized assessment of the safety of vaccines during pregnancy is the Brighton Collaboration. This independent, non-profit partnership was formed in the year 2000 as a voluntary international group seeking to facilitate the development, evaluation, and dissemination of high-quality information about the safety of human vaccines. The group started by developing a common language and standardized research methods to improve the accuracy and consistency of vaccine risk assessment. In 2014, stemming from a call by the WHO and with support from the Bill and Melinda Gates Foundation, the GAIA (Global Alignment on Immunization Safety Assessment in pregnancy) consortium was formed, with the goal of developing a globally concerted approach to actively monitoring the safety of vaccines and immunization programs in pregnancy (20). The GAIA group utilizes the format of the Brighton Collaboration to assess safety outcomes in mothers and infants after maternal vaccination, determining the level of certainty in the assessment of the event to ensure uniformity and comparability in different settings. In addition to pertinent clinical case definitions, the GAIA consortium has also published guidelines and tools for the assessment of vaccine safety in maternal immunization clinical trials (47, 48). These guidelines were supported by the Global Advisory Committee on Vaccine Safety of the WHO (21), and various clinical case definitions are undergoing evaluation and validation as they are utilized in settings ranging from retrospective and observational studies to prospective clinical trials worldwide.
review
99.9
Maternal immunization has the potential to significantly improve maternal and child health worldwide by reducing maternal and infant morbidity and mortality associated with disease caused by pathogens that are particularly relevant in the perinatal period and in early life, and for which no alternative effective preventive strategies exist. Active research encompassing all aspects of vaccines for administration during pregnancy is underway, with the support of multiple stakeholders and global participation. Substantial progress has been made, and the availability of new vaccines licensed for use in pregnant women is an achievable goal. While many challenges remain to be addressed, the achievements in maternal immunization research to date have advanced the field and improved the prospects of making maternal immunization a feasible and accessible strategy to improve global health.
review
99.9
Bordetella pertussis is the primary causative agent of whooping cough (pertussis), a respiratory disease most severe in unvaccinated infants. The introduction of vaccines against pertussis dramatically reduced disease incidence worldwide. However, many countries have recently experienced disease resurgence, in part due to genetic divergence of circulating strains. The resulting antigenic mismatch with vaccine references has led many to conclude that B. pertussis is evolving under vaccine-driven selection (1–5). Adaptation of B. pertussis is complicated by the varied administration of whole-cell and acellular vaccines between countries and the diversity of reference strains used for vaccine production (6–8). Here, we report the complete genome sequences of two such strains used in manufacturing pertussis vaccines: B202 (Lederle Laboratories, strain 134) and B203 (Sanofi-Pasteur MSD, strain 10536) (9).
study
99.94
Whole-genome shotgun sequencing was performed using a combination of the PacBio RSII (Pacific Biosciences, Menlo Park, CA), Illumina HiSeq/MiSeq (Illumina, San Diego, CA), and Argus (OpGen, Gaithersburg, MD) platforms, as described previously (10). Briefly, genomic DNA libraries were prepared for PacBio sequencing using the SMRTbell template prep kit 1.0 and polymerase binding kit P4, while Illumina libraries were prepared using the NEBNext Ultra library prep kit (New England BioLabs, Ipswich, MA). De novo genome assembly of filtered reads was performed using the Hierarchical Genome Assembly Process (HGAP version 3; Pacific Biosciences) at 130× and 144× coverage for B202 and B203, respectively. The resulting consensus sequences were determined with Quiver (version 1), manually checked for circularity, and then reordered to match the start of reference strain Tohama I (accession no. CP010964) (10). To ensure accuracy, assemblies were confirmed by comparison to BamHI and KpnI restriction digestion optical maps using the Argus system (OpGen) with MapSolver (version 2.1.1; OpGen) and further polished by mapping either Illumina HiSeq PE-100 or MiSeq PE-300 reads using CLC Genomics Workbench (version 8.5; CLC bio, Boston, MA). Final assemblies were annotated using the NCBI automated Prokaryotic Genome Annotation Pipeline (PGAP).
study
100.0
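As an aside on the reordering step described above: rotating a circular assembly so that it begins at the same position as a reference genome reduces to a string operation. The following is a minimal Python sketch of that idea, not the authors' pipeline; the function names are ours, and it assumes a single-contig circular assembly in which the first k bases of the reference occur exactly once on one strand or the other.

```python
# Hypothetical sketch of "reorder the assembly to match the reference start".
# Not the authors' code; assumes a single circular contig and a unique anchor.

def revcomp(seq: str) -> str:
    """Reverse-complement a DNA sequence."""
    return seq.translate(str.maketrans("ACGTacgt", "TGCAtgca"))[::-1]

def rotate_to_reference_start(assembly: str, reference: str, k: int = 50) -> str:
    """Rotate a circular assembly so it starts where the reference starts."""
    anchor = reference[:k]
    for seq in (assembly, revcomp(assembly)):   # try both strands
        pos = (seq + seq).find(anchor)          # doubling handles the origin wrap
        if 0 <= pos < len(seq):
            return seq[pos:] + seq[:pos]
    raise ValueError("reference start anchor not found in assembly")

# Toy usage: a circle stored with a shifted start point
ref = "ATGACGTTCCGGAATC"
asm = ref[5:] + ref[:5]
assert rotate_to_reference_start(asm, ref, k=6) == ref
```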
The average G+C content of both B202 and B203 was 67.1%, with genome sizes of 4,128,979 and 4,134,643 bp, respectively. Genome annotation identified 3,645 protein-coding genes in B202 and 3,636 protein-coding genes in B203. Both genomes encoded three rRNA operons and 51 tRNAs.
study
100.0
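Summary statistics such as the genome sizes and G+C contents reported above can be recomputed directly from the assembly FASTA files. The sketch below is ours, not part of the announced workflow, and the filename B202.fasta is a hypothetical placeholder.

```python
# Hypothetical sketch: recompute genome length and G+C content from a
# FASTA assembly using plain Python. The filename is a placeholder.

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            else:
                chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)

for name, seq in read_fasta("B202.fasta"):
    seq = seq.upper()
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    print(f"{name}: {len(seq):,} bp, G+C = {gc:.1%}")
```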
The assemblies were distinct from the genomes of vaccine reference strains Tohama I (GlaxoSmithKline, accession no. CP010964), CS (China, accession no. CP010963), and 137 (Brazil, accession no. CP010323), which have been sequenced previously (10, 11). B202 and B203 were not closely related to each other, and their genomes differed from that of Tohama I by multiple rearrangements, as well as by 186 and 410 single-nucleotide polymorphisms (SNPs), respectively. The genome of B202 was phylogenetically and structurally similar, but not identical, to other strains with the profile prn1-ptxP1-ptxA2-ptxB2-fimH1, such as clinical isolate H375 (accession no. CP010961) (10). B203 appeared to be closely related to Brazilian vaccine strain 137, sharing the allele profile prn7-ptxP2-ptxA4-ptxB2-fimH1, but differed by 13 SNPs and a single ~74-kb inversion flanked by rRNA operon copies.
study
100.0
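SNP counts like those above are derived from a whole-genome alignment (e.g., produced by a tool such as MUMmer); once an alignment is exported as two gapped, aligned sequences, tallying substitution columns is straightforward. The sketch below is a hypothetical illustration of that final tallying step, not the comparison method used for these genomes.

```python
# Hypothetical sketch: count substitution columns in a pairwise alignment,
# ignoring gap columns and ambiguous (non-ACGT) bases.

def count_snps(aln_a: str, aln_b: str) -> int:
    bases = set("ACGT")
    return sum(
        1
        for a, b in zip(aln_a.upper(), aln_b.upper())
        if a != b and a in bases and b in bases
    )

# Example: one substitution in the last column; the gap column is ignored.
print(count_snps("ACGT-ACGA", "ACGTTACGG"))  # -> 1
```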
Midwifery education varies across the world. In Iran, it is a four-year undergraduate program. Admission to the undergraduate midwifery program is based on a competitive national examination: the higher the rank in the exam, the better the chance of being offered a place on a midwifery course. Almost all midwifery students are school leavers with no experience of hospital environments.
other
99.94
The Ministry of Health and Medical Education designs the undergraduate midwifery curriculum for all universities across the country. A substantial part of the course is allocated to a variety of clinical skills learned during placements. After the first semester, students enter clinical settings in groups of 4 to 8 under the supervision of clinical instructors. These instructors are full- or part-time faculty employees, not hospital midwives. To complete the course, students must be the main birth attendant in at least 80 normal births and then pass a final clinical exam demonstrating their competency in managing different patients in clinical situations.
other
99.9
The provision of maternity services in labor wards in Iran is mostly organized around the medical model of care, and midwives work under the supervision of obstetricians. There are few examples of the midwifery-led model, in which autonomous midwives attend the births of their clients.
other
99.9
The undergraduate midwifery course aims to equip the students with the practical skills necessary to become practicing professional midwives.1,2 Clinical skills underpin midwives’ professional practice, and therefore students should have an opportunity to learn, develop and master clinical skills.3
other
99.9
To provide these opportunities, midwifery educators need to understand the teaching and learning process of midwifery students, which will enable them to enhance clinical competencies. Accordingly, there is an international call for exploring midwifery students' experiences of learning clinical skills.4,5 Indeed, very little literature on this topic was found during the literature review.
other
99.44
Some studies have, however, examined different aspects of student midwives' learning experiences. For example, in one study, midwifery students stated that the midwives who train them needed to be updated on teaching and learning strategies in clinical settings.6 In another study, in Australia, students perceived achieving the competency standards and confidence to practice as difficult because of the restricted nature of midwifery practice within the hospitals in which they were learning.7 Furthermore, the results of a study in England showed the importance of educators and mentors providing adequate support and feedback to promote the transfer of knowledge and skills into the workplace.8 Further research has also shown that both midwives and clinical settings can generate educational sources of stress for midwifery students.9
review
58.34
In Iran, the achievement of competencies by student midwives has been studied. The findings of these studies show that midwifery skills have declined in today's students compared with students of the previous generation.10-13 Lead midwives for education should take these findings into account and carry out further research to improve the quality of the clinical skills of midwifery students in Iran.
study
99.44
In Iran, there are also a number of surveys of midwifery students' perspectives on the current status of clinical education,14,15 clinical education problems,16 stressors,17 students' satisfaction with clinical education,18 perceived feedback,19 and support and supervision20 in the clinical environment. All of these studies used quantitative approaches, so they did not provide a rich description of the learning process of midwifery students. To our knowledge, only one phenomenological study has been conducted in Iran focusing on the experiences of midwifery graduates of clinical learning. In that study, the researchers concluded that instructor performance, pre-clinical training, and student satisfaction are the key factors associated with learning clinical skills, while the lack of peripheral facilities, lack of coordination in educational planning, and the behaviors of health care personnel are inhibiting factors for learning clinical skills.21
study
99.94
These findings show that there has been little qualitative analysis of clinical skills learning in midwifery students and that much uncertainty still exists about midwifery education in Iran. Consequently, providing a rich, thick description of students' experience using a qualitative inquiry approach offers readers a proxy experience for improving the quality of midwifery education in Iran. Hence, the objective of this investigation was to explore the experience of midwifery students in the context of learning clinical skills. The study addressed the following research question: how do midwifery students experience clinical skills learning in Iran?
study
99.94
A multi-center qualitative study was conducted in three universities: Tehran University of Medical Sciences (TUMS), Shahid Beheshti University of Medical Sciences (SBUMS), and Isfahan University of Medical Sciences (IUMS). The participants were midwifery students. Prior to undertaking the investigation, ethical approval was obtained from the ethics committee of Isfahan University of Medical Sciences. With regard to ethical considerations, the researchers explained the aim of the study, who would conduct the interviews, and how the data would be used. In addition, the students understood that their participation was entirely voluntary and that their responses were anonymous. All responses were kept confidential. Moreover, the students were aware that they could withdraw from the study at any time without fear of retribution. Before the research proceeded, written consent forms were signed by the students to indicate consent.
study
99.94
To inform participants, lecturers explained the purpose of the study to the students and then invited them to take part. The lecturers collected the contact numbers of those students who agreed to participate. Next, GA contacted the students to arrange a convenient time and place for the interview.
study
55.8
For this study, qualitative data were collected using convenience sampling. Semi-structured interviews were conducted with 12 students to collect focused, qualitative textual data. In addition, one focus group discussion was conducted with six final-year midwifery students. The purpose of the focus group discussion was to reveal group dynamics and issues, and also to cross-check and triangulate the qualitative data from different sources, which in turn enhances the validity of the study.22 The focus group discussion also confirmed data saturation.
study
99.94
The interviews were held at times and places convenient to each participant. Examples of interview questions used to elicit the students' experience of clinical skills learning were: 'Would you tell me about your experiences of clinical skills learning in midwifery?', 'Which factors helped you to learn the clinical skills better?', and 'Would you please tell me about your relationship with others (for example, midwives, instructors, …) in the clinical setting?' The interviewer probed the students' responses using questions or statements such as 'Could you tell me something more about that?', 'Could you give me an example?', and 'What do you mean by that?'
other
99.9
In the focus group discussion, the students were encouraged to talk to one another, ask questions, and exchange anecdotes and comments on each other's experiences and perspectives. Each interview lasted between 40 and 60 minutes, with an average of 50 minutes; the focus group discussion lasted 85 minutes. Each interview was digitally recorded and then fully transcribed. Interviews were conducted in Persian, and the data were translated into English for this paper using a back-translation process.
other
99.0
In the first step, the interview transcripts were re-read carefully to gain a deeper understanding of the qualitative data. In the second step, the transcripts were split into small meaningful units. In the third step, categories were generated by bringing several codes together. In the final step of data analysis, the categories were reviewed for emerging themes, which were then labelled. GA conducted these steps manually, although the research team was also involved in the analysis process. It should also be emphasized that, before the data were submitted to content analysis, the interview transcripts were sent to the students for approval; all students were satisfied with their transcripts.
study
96.25
Six broad themes were generated from the interview transcripts: 1) limited opportunities to experience skills, 2) difficulties with gaps in the course plan, 3) the need for a supportive clinical learning environment, 4) learning drives, 5) confusion between different methods, and 6) stress in the clinical setting. Short verbatim quotations from the participants are presented as evidence for the interpretation of the data.
other
99.2
Most of the participants expressed great concern about access to clinical learning opportunities. Because of the high number of students in clinical placement groups, they sometimes had to wait a considerable time for their turn at clinical experience. As a result, some students entered the fourth year of education without sufficient competency even in primary tasks such as establishing intravenous access. In these situations, the students' confidence decreased, and they worried that they would not achieve the minimum midwifery experiences before the end of the course. One student commented:
other
99.9
“I only managed two births by semester seven and actually in both of them my instructor did the main tasks. In the first clerkship experience in semester seven, I managed a birth very clumsily. My instructor shouted: aren’t you a final year student?!” (Semester 8 student)
other
99.94
To achieve the 80 required vaginal births within the timeframe of the educational program, some universities arranged for fourth-year students to undertake their placements in various non-educational hospitals with high birth rates. These hospitals were not student-oriented and mostly regarded the students as workforce rather than learners. At these hospitals, students managed births without the supervision of an instructor, hoping to get help from staff. However, most of the staff were not eager to accept the responsibility of teaching students because of their heavy workload. The students frequently stated that the personnel made them do ward chores rather than attend births. These poor educational environments left participants frustrated and distressed, and the learning outcomes for them in these situations were limited. One student stated that:
other
99.9
In the universities studied, the theoretical and practical parts of each study unit were provided simultaneously in one semester. This became problematic when students, particularly at the beginning of a semester, entered clinical fields before they had learned the related prerequisite theoretical knowledge. Participants felt that in these situations their optimal learning was hindered:
other
99.9
According to the curriculum of the undergraduate midwifery course, no obstetrics study units are provided in semester six. During this gap, students felt that their previously learned skills and knowledge diminished, and they found it difficult and time-consuming to regain their previous level of competency at the beginning of semester seven. For example, one student pointed out:
other
99.9
For our participants, a supportive clinical environment meant receiving support from their clinical instructors and the staff. The participants regarded instructors as supportive if they trusted the students and gave them opportunities to gain experience while remaining ready to step in if needed:
other
99.94
Since the clinical training took place in the territory of the midwives, their cooperation and support were very important. In the view of our participants, a supportive manner in staff meant being patient with students carrying out procedures and not pressuring them to hurry. Although a limited number of staff in this study were supportive, in most cases staff behavior inhibited clinical learning:
other
99.9
A number of motives pushed the students to learn. First, clinical midwifery practice seemed attractive to some of them: they counted the minutes until they could attend a birth. As their competency expanded and they learned new skills, they became encouraged to experience more and more skills:
other
99.94
"I love to manage a birth. It is wonderful when you help a birthing mother… I love that when I catch the baby and when I am witnessing the happiness and emotion of the mother just after birth. As I progressed with my midwifery practice skills, I felt a strong enthusiasm to experience more." (Semester 8 student)
other
99.94
Nearly all of the participants stated that there were times when they felt confused about what was really true. One cause of confusion was the gap between theory and practice: in many cases, the expectations created by the theoretical education at the university did not match the situations students faced in the clinical setting:
other
99.9
Another difficulty in this area arose from the lack of continuity among clinical instructors. Sometimes one practical study unit in a clinical placement was taught by a different instructor each week, each with their own specific methods and expectations for the procedures. As a result, the students became confused and insecure:
other
99.9
"The first time I was at a birth, I was afraid a lot. It was terrible. The midwife cut the perineum… I was so scared… This was terrifying… The scissor was tearing the tissue apart… It was scary to tear apart someone’s muscles… And then, the birth of the baby’s head was terrifying too. His hair was stained with blood …" (Semester 6 student)
other
88.1
Moreover, the human environment of the clinical placement could be a source of stress for the students. The poor relationships of some clinical trainers and staff with students, struggles with residents over birth management, and fear of being reprimanded for probable errors were among these sources of stress:
other
99.9
"On the way to the hospital, I am always worried about lots of things…. Not to do anything wrong… Not being criticized by my instructor or midwives… How should be today’s fight [smile] over the birth … these are my real concerns which make me sick." (Semester 7 student)
other
99.94
This study set out to explore midwifery students' experience of learning clinical skills in Iran. The present findings are consistent with other studies in Iran,14,15,24 which found that the mismatch between the number of students in training groups and the available clinical training resources was problematic for midwifery students. Sending students to additional non-educational clinical placements without the support of a clinical trainer in order to achieve the 80 required births is not a good solution. Indeed, according to the International Confederation of Midwives (ICM), the presence of a clinical trainer in clinical placements is essential for midwifery learning.1 As identified in Blåka's25 study, the absence of a trainer not only imposed significant stress on our participants but also deprived them of feedback from trainers, which is essential for clinical education. This decreased the clinical learning outcomes for the participants.
study
100.0